Patent abstract:
LAST SIGNIFICANT COEFFICIENT POSITION CODING OF A VIDEO BLOCK BASED ON A SCAN ORDER FOR THE BLOCK IN VIDEO ENCODING
In one example, an apparatus is described for encoding coefficients associated with a block of video data during a video encoding process, wherein the apparatus includes a video encoder configured to encode x and y coordinates that indicate a position of the last non-zero coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and to encode interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, wherein the second scan order is different from the first scan order.
Publication number: BR112013013650B1
Application number: R112013013650-2
Application date: 2011-11-30
Publication date: 2021-03-23
Inventors: Muhammed Zeyd Coban; Yunfei Zheng; Rajan Laxman Joshi; Marta Karczewicz; Joel Sole Rojals
Applicant: Velos Media International Limited
IPC main classification:
Patent description:

[0001] This application claims the benefit of U.S. Provisional Application No. 61/419,740, filed December 3, 2010, U.S. Provisional Application No. 61/426,426, filed December 22, 2010, U.S. Provisional Application No. 61/426,360, filed December 22, 2010, and U.S. Provisional Application No. 61/426,372, filed December 22, 2010, the entire contents of each of which are hereby incorporated by reference.
FIELD OF THE INVENTION
[0002] This description relates to video coding and, more specifically, to the coding of syntax information related to coefficients of a video block.
DESCRIPTION OF THE PRIOR ART
[0003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio telephones, so-called "smartphones", video teleconferencing devices, video streaming devices and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), the High Efficiency Video Coding (HEVC) standard currently under development, and extensions of such standards.
[0004] Video devices can transmit, receive, encode, decode and/or store digital video information more effectively by implementing such video compression techniques.
[0005] Video compression techniques perform spatial (intra-image) prediction and/or temporal (inter-image) prediction to reduce or remove the redundancy inherent in video sequences. For block-based video coding, a video slice (that is, a video frame or a part of a video frame) can be partitioned into video blocks, which may also be referred to as tree blocks, coding units (CUs) and/or coding nodes. Video blocks in an intra-coded (I) slice of an image are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same image. Video blocks in an inter-coded (P or B) slice of an image can use spatial prediction with respect to reference samples in neighboring blocks in the same image, or temporal prediction with respect to reference samples in other reference images. The images may be referred to as frames, and the reference images may be referred to as reference frames.
[0006] Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is coded according to a motion vector that points to a block of reference samples forming the predictive block, and residual data indicating the difference between the coded block and the predictive block. An intra-coded block is coded according to an intra-coding mode and residual data. For additional compression, the residual data can be transformed from the pixel domain into a transform domain, resulting in residual transform coefficients, which can then be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, can be scanned to produce a one-dimensional vector of transform coefficients, and entropy coding can be applied to achieve even more compression.
SUMMARY OF THE INVENTION
[0007] This description describes techniques for encoding coefficients associated with a block of video data during a video encoding process, including techniques for encoding information that identifies the position of a last non-zero, or "significant", coefficient within the block according to a scan order associated with the block, that is, last significant coefficient position information for the block. The techniques of this description can improve the efficiency of encoding last significant coefficient position information for blocks of video data used to encode the blocks, by encoding the last significant coefficient position information for a specific block based on information that identifies the scan order associated with the block, that is, scan order information for the block. In other words, the techniques can improve the compression of the last significant coefficient position information for the block when the information is encoded. The techniques of this description can also allow coding systems to have less complexity relative to other systems when encoding the last significant coefficient position information for the blocks, by encoding the last significant coefficient position information for a specific block using common statistics when one of a plurality of scan orders is used to encode the block.
[0008] In one example, coding efficiency can be improved, and the complexity of the coding system can be reduced, by encoding x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block when the scan order comprises a first scan order, and by encoding "swapped", or interchanged, x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order.
[0009] In this example, the first and second scan orders can be symmetrical with respect to one another (or at least partially symmetrical). Because of the symmetry between the first and second scan orders, the probability of the x coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the y coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability of the y coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the x coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. In other words, the x and y coordinates, when the scan order comprises the first scan order, can each have the same probability, or a similar probability, of comprising the given value as the interchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the x and y coordinates and the interchanged x and y coordinates can be encoded using common statistics for purposes of context-adaptive entropy coding, which may result in the use of coding systems that are less complex compared to other systems. In addition, the common statistics can be updated based on the x and y coordinates and the interchanged x and y coordinates, which can result in greater accuracy of the statistics compared to similar statistics updated using other techniques and, therefore, in more effective coding of the respective coordinates.
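By way of illustration only, the following minimal sketch (in Python, with hypothetical function and constant names, assuming a horizontal scan as the first scan order and a vertical scan as the symmetric second scan order) shows how the coordinates passed to the entropy coder could be swapped so that both scan orders share one set of statistics:

```python
# Hypothetical sketch: swapping the last-significant-coefficient coordinates
# so that two symmetric scan orders (e.g., horizontal and vertical) can share
# one set of coding statistics. Names are illustrative, not taken from any
# standard or reference codec.

HORIZONTAL = 0  # "first" scan order in this sketch
VERTICAL = 1    # "second" scan order, symmetric to the first

def coordinates_to_code(last_x, last_y, scan_order):
    """Return the (x, y) pair that is actually passed to the entropy coder."""
    if scan_order == VERTICAL:
        # Swap so the vertical scan reuses the statistics of the horizontal scan.
        return last_y, last_x
    return last_x, last_y

def coordinates_from_code(coded_x, coded_y, scan_order):
    """Inverse mapping performed by the decoder after entropy decoding."""
    if scan_order == VERTICAL:
        return coded_y, coded_x
    return coded_x, coded_y

# Example: last significant coefficient at column 3, row 1.
print(coordinates_to_code(3, 1, HORIZONTAL))  # (3, 1) coded directly
print(coordinates_to_code(3, 1, VERTICAL))    # (1, 3) coded with swapped roles
```

In this sketch the decoder applies the inverse mapping after entropy decoding, so the swap is transparent to the remainder of the decoding process.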
[0010] In another example, coding efficiency can be improved by encoding the last significant coefficient position information for a block of video data incrementally, to the extent necessary, which can result in more effective encoding of the information. In addition, in cases where it is necessary to encode the information in its entirety, coding efficiency can be improved by encoding the information using context-adaptive entropy coding, such that the statistics used to encode the information are selected based, at least in part, on the scan order associated with the block. Encoding the information in this way can result in the use of more accurate statistics than when using other methods and, again, in more efficient coding of the last significant coefficient position information for the block.
[0011] The techniques in this description can be used with any context-adaptive entropy coding methodology, including CABAC, probability interval partitioning entropy (PIPE) coding, or another context-adaptive entropy coding methodology. CABAC is described in this description for purposes of example, but without limitation as to the techniques broadly described in this description. In addition, the techniques can also be applied to the encoding of other types of data generally, for example, in addition to video data.
[0012] Therefore, the techniques of this description may allow the use of coding methods that are more efficient relative to other methods, and the use of coding systems that are less complex relative to other systems, when encoding the last significant coefficient position information for one or more blocks of video data. In this way, there can be a relative bit savings for an encoded bitstream that includes the information, and a relative reduction in complexity for the system used to encode the information, when using the techniques of this description.
[0013] In one example, a method for encoding coefficients associated with a block of video data during a video encoding process includes encoding x and y coordinates that indicate a position of the last non-zero coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and encoding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0014] In another example, an apparatus for encoding coefficients associated with a block of video data during a video encoding process includes a video encoder configured to encode x and y coordinates that indicate a position of the last non-zero coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and to encode interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0015] In another example, a device for encoding coefficients associated with a block of video data during a video encoding process includes means for encoding x and y coordinates that indicate a position of the last non-zero coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and means for encoding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0016] The techniques described in this description can be implemented in hardware, software, firmware or combinations thereof. If implemented in hardware, a device can be realized as an integrated circuit, a processor, discrete logic, or any combination thereof. If implemented in software, the software can run on one or more processors, such as a microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or a digital signal processor (DSP). The software that performs the techniques can initially be stored in a tangible computer-readable medium and loaded and run on the processor.
[0017] Therefore, this description also contemplates a computer-readable medium comprising instructions that, when executed, cause a processor to encode coefficients associated with a block of video data during a video encoding process, in which the instructions cause the processor to encode x and y coordinates that indicate a position of the last non-zero coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and to encode interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0018] Details of one or more examples are presented in the accompanying drawings and in the description that follows. Other features, objects and advantages will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] Figure 1 is a block diagram showing an example of a video encoding and decoding system that can implement techniques to effectively encode last significant coefficient position information based on scan order information for a block of video data, consistent with the techniques of this description.
Figure 2 is a block diagram showing an example of a video encoder that can implement techniques to effectively encode last significant coefficient position information based on scan order information for a block of video data, consistent with the techniques of this description.
Figure 3 is a block diagram showing an example of a video decoder that can implement techniques to effectively decode encoded last significant coefficient position information based on scan order information for a block of video data, consistent with the techniques of this description.
Figures 4A-4C are conceptual diagrams showing an example of a block of video data and corresponding significant coefficient position information and last significant coefficient position information.
Figures 5A-5C are conceptual diagrams showing examples of blocks of video data scanned using a zigzag scan order, a horizontal scan order and a vertical scan order.
Figures 6A-6C are conceptual diagrams showing examples of blocks of video data for which last significant coefficient position information is coded based on scan order information, in accordance with the techniques of this description.
Figure 7 is a flowchart showing an example of a method to effectively encode last significant coefficient position information based on scan order information for a block of video data, in accordance with the techniques of this description.
Figure 8 is a flowchart showing an example of a method to effectively encode last significant coefficient position information based on scan order information for a block of video data, in accordance with the techniques of this description.
Figure 9 is a flowchart showing an example of a method for effectively decoding encoded last significant coefficient position information based on scan order information for a block of video data, in accordance with the techniques of this description.
Figure 10 is a flowchart showing another example of a method to effectively encode last significant coefficient position information based on scan order information for a block of video data, in accordance with the techniques of this description.
Figure 11 is a flowchart showing another example of a method for effectively decoding encoded last significant coefficient position information based on scan order information for a block of video data, in accordance with the techniques of this description.
DETAILED DESCRIPTION OF THE INVENTION
[0020] This description describes techniques for encoding coefficients associated with a block of video data during a video encoding process, including techniques for encoding information that identifies the position of a last non-zero, or "significant", coefficient within the block according to a scan order associated with the block, that is, last significant coefficient position information for the block. The techniques of this description can improve the efficiency of encoding last significant coefficient position information for blocks of video data used to encode the blocks, by encoding the last significant coefficient position information for a specific block based on information that identifies the scan order associated with the block, that is, scan order information for the block. In other words, the techniques can improve the compression of the last significant coefficient position information for the blocks when the information is encoded. The techniques of this description can also allow the use of coding systems that have less complexity relative to other systems when encoding the last significant coefficient position information for the blocks, by encoding the last significant coefficient position information for a specific block using common statistics when one of a plurality of scan orders is used to encode the block.
[0021] In this description, the term "coding" refers to encoding that occurs in the encoder or decoding that occurs in the decoder. Likewise, the term "coder" refers to an encoder, a decoder, or a combined encoder/decoder ("CODEC"). The terms coder, encoder, decoder and CODEC all refer to specific machines designed for the coding (encoding and/or decoding) of video data consistent with this description.
[0022] In general, empirical tests carried out during the development of these techniques have demonstrated a correlation between last significant coefficient position information and scan order information for a block of video data. For example, a position of the last significant coefficient within a block of video data according to a scan order associated with the block, that is, the scan order used to encode the block, may depend on the scan order. In other words, the statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order can vary depending on the scan order that is used to encode the block. Therefore, encoding the last significant coefficient position information for the block using context-adaptive entropy coding, such that the statistics used to encode the information are selected based, at least in part, on the scan order information for the block, can lead to more accurate statistics and can thus result in more effective encoding of the last significant coefficient position information.
[0023] In addition, according to the techniques of this description, the last significant coefficient position information for a block of video data can be encoded using x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order associated with the block. In these cases, the statistics described above may indicate the probability of a coordinate, such as an x or y coordinate that corresponds to the position of the last significant coefficient within the block according to the scan order, comprising a given value (such as "0", "1", "2", etc.). Since some scan orders, such as a first scan order and a second scan order, can be symmetrical with respect to one another (or at least partially symmetrical), the probability of the x coordinate comprising a given value when the scan order comprises the first scan order may be identical or similar to the probability of the y coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. Similarly, the probability of the y coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the x coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same probability, or a similar probability, of comprising the given value as the "swapped", or interchanged, x and y coordinates, respectively, when the scan order comprises the second scan order, so the x and y coordinates and the interchanged x and y coordinates can be encoded using common statistics.
[0024] Therefore, encoding the x and y coordinates, when the scan order comprises the first scan order, and encoding the interchanged x and y coordinates, when the scan order comprises the second scan order, using common statistics, can result in reducing the complexity of the coding system. In addition, updating the common statistics based on the x and y coordinates and the interchanged x and y coordinates can lead to more accurate statistics, which can result, once again, in more effective encoding of the last significant coefficient position information.
[0025] As an example, the techniques of this description can improve coding efficiency and reduce the complexity of the system by encoding x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block when the scan order comprises a first scan order, and by encoding interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order.
[0026] In this example, the x and y coordinates and the interchanged x and y coordinates can be encoded using common statistics for purposes of context-adaptive entropy coding, which may result in the use of coding systems that are less complex compared to other systems, such as, for example, systems that include separate statistics for each scan order that can be used within the systems to encode blocks of video data. In addition, the common statistics can be updated based on the x and y coordinates and the interchanged x and y coordinates, which can result in greater accuracy of the statistics compared to that of similar statistics updated using other techniques, such as, for example, statistics updated for a specific scan order that can be used within a system to encode blocks of video data. Consequently, the x and y coordinates and the interchanged x and y coordinates, that is, the last significant coefficient position information for the block, can be encoded more effectively than similar information encoded using other methods.
[0027] As another example, the techniques of this description can improve coding efficiency by encoding the last significant coefficient position information for a block of video data incrementally, to the extent necessary. Consequently, the last significant coefficient position information can be encoded using less information than when using other techniques, such as, for example, always encoding the last significant coefficient position information for the block in its entirety. In addition, in cases where it is necessary to encode the last significant coefficient position information in its entirety, coding efficiency can be improved by encoding the information using context-adaptive entropy coding, such that the statistics used to encode the information are selected based, at least in part, on scan order information for the block. Encoding the last significant coefficient position information in this way can result in the use of more accurate statistics than when using other methods, such as, for example, selecting statistics without considering the scan order information for the block, and, again, in more effective encoding of the last significant coefficient position information.
[0028] In the examples described above, to encode last significant coefficient position information for a block of video data using statistics, the information can be encoded by performing a context-adaptive binary arithmetic coding (CABAC) process, which includes applying a context model based on one or more contexts. In other examples, other context-adaptive entropy coding processes, such as context-adaptive variable length coding (CAVLC), probability interval partitioning entropy (PIPE) coding, and other context-adaptive entropy coding processes, can also use the techniques of this description. CABAC is described in this description for purposes of example, but without limitation as to the techniques broadly described in this description. In addition, the techniques can also be applied to the encoding of other types of data generally, for example, in addition to video data.
[0029] Encoding last significant coefficient position information for one or more blocks of video data in the manner described above may allow the use of encoding methods that are more effective compared to other methods, and the use of encoding systems that are less complex than other systems. In this way, there can be a relative bit savings for an encoded bitstream that includes the information, and a relative reduction in complexity for the system used to encode the information, when using the techniques of this description.
[0030] Figure 1 is a block diagram showing an example of a video encoding and decoding system 10 that can implement techniques for encoding last significant coefficient position information for a block of video data prior to encoding significant coefficient position information for the block, consistent with the techniques of this description. As shown in Figure 1, system 10 includes a source device 12, which transmits encoded video to a destination device 14 via a communication channel 16. The source device 12 and the destination device 14 can comprise any one of a wide range of devices. In some cases, the source device 12 and the destination device 14 may comprise wireless communication devices, such as wireless telephone handsets, so-called cellular or satellite radiotelephones, or any wireless devices that can communicate video information through a communication channel 16, in which case the communication channel 16 is wireless.
[0031] The techniques in this description, however, which concern the encoding of last significant coefficient position information based on scan order information for a block of video data, are not necessarily limited to wireless applications or configurations. These techniques can generally be applied to any scenario in which encoding or decoding is carried out, including over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming Internet video transmissions, digital video that is encoded onto a storage medium or retrieved from a storage medium and decoded, or other scenarios. Therefore, the communication channel 16 is not necessary, and the techniques of this description can be applied to configurations in which encoding is applied or in which decoding is applied, such as, for example, without any data communication between the encoding and decoding devices.
[0032] In the example in Figure 1, the source device 12 includes a video source 18, a video encoder 20, a modulator/demodulator (modem) 22 and a transmitter 24. The destination device 14 includes a receiver 26, a modem 28, a video decoder 30 and a display device 32. In accordance with this description, the video encoder 20 of the source device 12 and/or the video decoder 30 of the destination device 14 can be configured to apply the techniques for encoding last significant coefficient position information based on scan order information for a block of video data. In other examples, a source device and a destination device may include other components or arrangements. For example, the source device 12 can receive video data from an external video source 18, such as an external camera. In the same way, the destination device 14 can interface with an external display device, instead of including an integrated display device.
[0033] The system 10 shown in Figure 1 is merely an example. Techniques to effectively encode last significant coefficient position information based on scan order information for a block of video data can be performed by any digital video encoding and/or decoding device. While the techniques in this description are generally performed by a video encoding device, the techniques can also be performed by a video encoder/decoder, typically referred to as a "CODEC". Furthermore, the techniques in this description can also be performed by a video processor. The source device 12 and the destination device 14 are merely examples of such coding devices, in which the source device 12 generates encoded video data for transmission to the destination device 14. In some examples, the devices 12, 14 can operate in a substantially symmetrical manner, such that each of the devices 12, 14 includes video encoding and decoding components.
[0034] Consequently, system 10 can support unidirectional or bidirectional video transmission between video devices 12, 14, such as, for example, for video streaming, video playback, video broadcasting, or video telephony.
[0035] The video source 18 of the source device 12 can include a video capture device, such as a video camera, a video archive that contains previously captured video, and/or a video feed from a video content provider. As another alternative, video source 18 can generate computer graphics-based data as the source video, or a combination of live video, archived video and computer-generated video. In some cases, if the video source 18 is a video camera, the source device 12 and the destination device 14 can form so-called camera phones or video phones. As mentioned above, however, the techniques described in this description are applicable to video encoding in general, and can be applied to wireless and/or wired applications. In each case, the captured, pre-captured or computer-generated video can be encoded by video encoder 20. The encoded video information can then be modulated by modem 22 according to a communication standard and transmitted to the destination device 14 via transmitter 24. Modem 22 may include various mixers, filters, amplifiers or other components designed for signal modulation. Transmitter 24 may include circuits designed to transmit data, including amplifiers, filters and one or more antennas.
[0036] Receiver 26 of destination device 14 receives information over channel 16, and modem 28 demodulates the information. Again, the video encoding process described above can implement one or more of the techniques described here to effectively encode last significant coefficient position information based on scan order information. The information communicated through channel 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, and which includes syntax elements that describe characteristics and/or processing of blocks of video data (macroblocks or coding units, for example), such as, for example, last significant coefficient position information and/or scan order information for the blocks, and/or other information. The display device 32 displays the decoded video data to a user and can comprise any of several display devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display or another type of display device.
[0037] In the example of Figure 1, the communication channel 16 can comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. Communication channel 16 can be part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from the source device 12 to the destination device 14, including any suitable combination of wired or wireless media. Communication channel 16 may include routers, switches, base stations or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14. In other examples, the encoding or decoding devices may implement the techniques of this description without any communication between such devices. For example, an encoding device can encode and store an encoded bitstream in accordance with the techniques of this description. Alternatively, a decoding device can receive or retrieve an encoded bitstream and decode the bitstream in accordance with the techniques of this description.
[0038] Video encoder 20 and video decoder 30 can operate according to a video compression standard, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC). The techniques of this description, however, are not limited to any specific coding standard. Other examples include MPEG-2, ITU-T H.263 and the High Efficiency Video Coding (HEVC) standard currently in development. In general, the techniques in this description are described with respect to HEVC, but it should be understood that these techniques can be used in conjunction with other video coding standards as well. Although not shown in Figure 1, in some aspects the video encoder 20 and video decoder 30 can each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle the encoding of both audio and video in a common data stream or in separate data streams. If applicable, MUX-DEMUX units can conform to the ITU H.223 multiplexer protocol or to other protocols such as the user datagram protocol (UDP).
[0039] The video encoder 20 and video decoder 30 can each be implemented as any one of several suitable encoder and decoder circuits, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combination thereof. Each of the video encoder 20 and video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined encoder/decoder (CODEC) in a respective camera, computer, mobile device, subscriber device, broadcast device, set-top box, server or the like.
[0040] A video sequence typically includes a series of video frames. A group of pictures (GOP) generally comprises a series of one or more video frames. A GOP can include syntax data in a GOP header, in a header of one or more frames of the GOP, or elsewhere, which describes the number of frames included in the GOP. Each frame can include frame syntax data that describes an encoding mode for the respective frame. A video encoder, such as video encoder 20, typically operates on video blocks within individual video frames to encode the video data. According to the ITU-T H.264 standard, a video block can correspond to a macroblock or to a partition of a macroblock. According to other standards, such as HEVC, described in more detail below, a video block can correspond to a coding unit (the largest coding unit, for example) or to a partition of a coding unit. Video blocks can be fixed or variable in size, and may differ in size according to a specified coding standard. Each video frame can include a series of slices, that is, parts of the video frame. Each slice can include a series of video blocks, which can be arranged into partitions, also referred to as sub-blocks.
[0041] Depending on the specified coding standard, video blocks can be partitioned into several "NxN" sub-block sizes, such as 16x16, 8x8, 4x4, 2x2 and so on. In this description, "NxN" and "N by N" can be used interchangeably to refer to the pixel dimensions of the block in terms of the vertical and horizontal dimensions, such as 16x16 pixels or 16 by 16 pixels. In general, a 16x16 block will have sixteen pixels in the vertical direction (y = 16) and sixteen pixels in the horizontal direction (x = 16). Likewise, an NxN block generally has N pixels in the vertical direction and N pixels in the horizontal direction, where N represents a non-negative integer value. The pixels in a block can be arranged in rows and columns. Furthermore, it is not necessary for blocks to have the same number of pixels in the horizontal and vertical directions. For example, blocks can comprise NxM pixels, where M is not necessarily equal to N. As an example, in the ITU-T H.264 standard, blocks that are 16 by 16 pixels in size can be referred to as macroblocks, and blocks that are smaller than 16 by 16 pixels can be referred to as partitions of a 16 by 16 macroblock. In other standards, such as HEVC, blocks can be defined more generally with respect to their size, for example, as coding units and their partitions, each having a variable, rather than fixed, size.
[0042] Video blocks may comprise blocks of pixel data in the pixel domain, or blocks of transform coefficients in the transform domain, for example, following the application of a transform, such as a discrete cosine transform (DCT), an integer transform, a wavelet transform or a conceptually similar transform, to residual data for a given video block, where the residual data represents pixel differences between the video data for the block and the predictive data generated for the block. In some cases, video blocks may comprise blocks of quantized transform coefficients in the transform domain, where, following the application of a transform to the residual data for a given video block, the resulting transform coefficients are also quantized.
[0043] Block partitioning serves an important purpose in block-based video coding techniques. Using smaller blocks to encode video data can result in better prediction of the data for locations in a video frame that include high levels of detail, and can therefore reduce the resulting error (that is, the deviation of the prediction data from the source video data), represented as residual data. While potentially reducing residual data, such techniques may nevertheless require additional syntax information to indicate how the smaller blocks are partitioned with respect to a video frame, and can result in an increased coded video bit rate. Therefore, in some techniques, block partitioning may depend on balancing the desirable reduction in residual data against the resulting increase in the bit rate of the coded video data due to the additional syntax information.
[0044] In general, blocks and their various partitions (that is, sub-blocks) can be considered video blocks. In addition, a slice can be considered to be a series of video blocks (macroblocks or coding units, for example) and/or sub-blocks (partitions of macroblocks or coding units, for example). Each slice can be an independently decodable unit of a video frame. Alternatively, the frames themselves can be decodable units, or other parts of a frame can be defined as decodable units. In addition, a GOP, also referred to as a sequence, can be defined as a decodable unit.
[0045] Efforts are currently underway to develop a new video coding standard, currently referred to as High Efficiency Video Coding (HEVC). The emerging HEVC standard can also be referred to as H.265. The standardization efforts are based on a video coding device model referred to as the HEVC Test Model (HM). The HM assumes several additional capabilities of video coding devices relative to devices according to, for example, ITU-T H.264/AVC. For example, while H.264 provides nine intra-prediction encoding modes, the HM provides as many as thirty-five intra-prediction encoding modes, based on the size of the block being encoded with intra-prediction, for example.
[0046] The HM refers to a block of video data as a coding unit (CU). A CU can refer to a rectangular image region that serves as a basic unit to which various coding tools are applied for compression. In H.264, it could also be called a macroblock. Syntax data within a bitstream can define the largest coding unit (LCU), which is the largest CU in terms of the number of pixels. In general, a CU serves a purpose similar to that of an H.264 macroblock, except that a CU does not have a size distinction. Thus, a CU can be partitioned, or "split", into sub-CUs.
[0047] An LCU can be associated with a quadtree data structure that indicates how the LCU is partitioned. In general, a quadtree data structure includes one node per CU of an LCU, where a root node corresponds to the LCU and the other nodes correspond to sub-CUs of the LCU. If a given CU is split into four sub-CUs, the node in the quadtree that corresponds to the split CU includes four child nodes, each of which corresponds to one of the sub-CUs. Each node in the quadtree data structure can provide syntax information for the corresponding CU. For example, a node in the quadtree can include a split flag for the CU, which indicates whether the CU that corresponds to the node is split into four sub-CUs. The syntax information for a given CU can be defined recursively, and may depend on whether the CU is split into sub-CUs.
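As a hedged illustration of this recursive structure, the following Python sketch (with hypothetical names; it is not a bitstream parser for any particular standard) consumes one split flag per node, in depth-first order, and returns the resulting leaf CUs:

```python
# Hypothetical sketch of the recursive CU quadtree described above: each node
# carries a split flag; a set flag means the CU is divided into four sub-CUs.
# The reader below simply consumes a pre-parsed list of flags in depth-first
# order, stopping at an assumed minimum CU size.

def parse_cu(x, y, size, flags, min_size=8):
    """Recursively split a CU and return the list of leaf-CU rectangles."""
    if size > min_size and flags.pop(0) == 1:   # split flag == 1 -> four sub-CUs
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += parse_cu(x + dx, y + dy, half, flags, min_size)
        return leaves
    return [(x, y, size)]                       # leaf CU (no further split)

# Example: a 32x32 LCU whose first (top-left) sub-CU is split once more,
# yielding four 8x8 leaf CUs plus three 16x16 leaf CUs.
flags = [1, 1, 0, 0, 0]
print(parse_cu(0, 0, 32, flags))
```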
[0048] A CU that is not split (that is, a CU that corresponds to a terminal, or "leaf", node in a given quadtree) can include one or more prediction units (PUs). In general, a PU represents all or a part of the corresponding CU and includes data for retrieving a reference sample for the PU, for the purpose of performing prediction for the CU. For example, when the CU is intra-mode encoded, the PU can include data that describes an intra-prediction mode for the PU. As another example, when the CU is inter-mode encoded, the PU can include data that defines a motion vector for the PU. The data that defines the motion vector can describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution for the motion vector (one-quarter pixel precision or one-eighth pixel precision, for example), a reference frame to which the motion vector points and/or a reference list (list 0 or list 1, for example) for the motion vector. The data for the CU that define the PU or PUs of the CU can also describe, for example, the partitioning of the CU into the PU or PUs. The partitioning modes can differ depending on whether the CU is skip mode coded, intra-prediction mode coded, or inter-prediction mode coded.
[0049] A CU that has one or more PUs can also include one or more transform units (TUs). Following prediction for a CU using one or more PUs, as described above, a video encoder can calculate one or more residual blocks for the respective parts of the CU that correspond to the PU or PUs. The residual blocks can represent a pixel difference between the video data for the CU and the predicted data for the PU or PUs. A set of residual values can be transformed, scanned and quantized to define a set of quantized transform coefficients. A TU can define a partition data structure that indicates partition information for the transform coefficients that is substantially similar to the quadtree data structure described above with reference to a CU. A TU is not necessarily limited to the size of a PU. Thus, TUs can be larger or smaller than the corresponding PUs for the same CU. In some examples, the maximum size of a TU may correspond to the size of the corresponding CU. In one example, residual samples that correspond to a CU can be subdivided into smaller units using a quadtree structure known as a "residual quad tree" (RQT). In this case, the leaf nodes of the RQT can be referred to as TUs, for which the corresponding residual samples can be transformed and quantized.
[0050] Following intra-predictive or inter-predictive coding to produce predictive data and residual data, and following any transforms (such as the 4x4 or 8x8 integer transform used in H.264/AVC, or a discrete cosine transform (DCT)) to produce transform coefficients, quantization of the transform coefficients can be performed. Quantization generally refers to a process in which the transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients. The quantization process can reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value can be rounded to an m-bit value during quantization, where n is greater than m.
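A minimal numeric illustration of this bit-depth reduction, using an illustrative quantization step and a simple rounding rule rather than the quantization of any particular standard, is:

```python
# Illustrative only: a 10-bit transform coefficient (n = 10) reduced to a
# roughly 6-bit level (m = 6) by dividing by a quantization step and rounding.
# Step size and reconstruction rule are assumptions for this sketch.

def quantize(coeff, qstep):
    return int(round(coeff / qstep))

def dequantize(level, qstep):
    return level * qstep

coeff = 741                             # fits in 10 bits
qstep = 16                              # drops roughly 4 bits of precision
level = quantize(coeff, qstep)          # 46, fits in 6 bits
print(level, dequantize(level, qstep))  # 46 736 (lossy reconstruction)
```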
[0051] Following quantization, entropy coding of the quantized data (that is, the quantized transform coefficients) can be performed. The entropy coding can conform to the techniques of this description with respect to encoding the last significant coefficient position information for a block of video data before encoding the significant coefficient position information for the block, and may also use other entropy coding techniques, such as context-adaptive variable length coding (CAVLC), CABAC, PIPE or another entropy coding methodology. For example, coefficient values, represented as magnitudes and corresponding signs ("+1" or "-1", for example) for the quantized transform coefficients, can be encoded using entropy coding techniques.
[0052] It should be noted that the prediction, transform and quantization described above can be performed for any block of video data, for example, for a PU and/or TU of a CU, or for a macroblock, depending on the specified coding standard. Therefore, the techniques of this description, referring to the effective encoding of last significant coefficient position information based on scan order information for a block of video data, can be applied to any block of video data, such as, for example, to any block of quantized transform coefficients, including a macroblock, or a TU of a CU. In addition, a block of video data (a macroblock, or a TU of a CU, for example) can include a luminance component (Y), a first chrominance component (U) and a second chrominance component (V) of the corresponding video data. Therefore, the techniques of this description can be performed for each of the Y, U and V components of a given block of video data.
[0053] To encode blocks of video data as described above, information regarding the position of significant coefficients within a given block can also be generated and encoded. Then, the values of the significant coefficients can be encoded, as described above. In H.264/AVC and the emerging HEVC standard, when using a context-adaptive entropy coding process, such as a CABAC process, the position of significant coefficients within a block of video data can be encoded before encoding the values of the significant coefficients. The process of encoding the position of all significant coefficients within the block can be referred to as significance map (SM) encoding. Figures 4A-4C, described in more detail below, are conceptual diagrams showing an example of a 4x4 block of quantized transform coefficients and corresponding SM data.
[0054] A typical SM coding procedure can be described as follows. For a given block of video data, an SM can be encoded only if there is at least one significant coefficient within the block. The presence of significant coefficients within a given block of video data can be indicated in a coded block pattern (using the syntax element "coded_block_pattern", or CBP, for example), which is a binary value coded for a set of blocks (such as luminance and chrominance blocks) associated with an area of pixels in the video data. Each bit in the CBP is referred to as a coded block flag (corresponding to the syntax element "coded_block_flag", for example) and is used to indicate whether there is at least one significant coefficient within its corresponding block. In other words, a coded block flag is a one-bit symbol that indicates whether there are any significant coefficients within a single block of transform coefficients, and a CBP is a set of coded block flags for a set of related blocks of video data.
[0055] If a coded block flag indicates that no significant coefficients are present within the corresponding block (the flag equals "0", for example), no further information may be encoded for the block. However, if a coded block flag indicates that at least one significant coefficient exists within the corresponding block (the flag equals "1", for example), an SM can be encoded for the block following a coefficient scan order associated with the block. The scan order can define the order in which the significance of each coefficient within the block is encoded as part of the SM encoding. In other words, the scan can serialize the two-dimensional block of coefficients into a one-dimensional representation to determine the significance of the coefficients. Different scan orders (zigzag, horizontal and vertical, for example) can be used. Figures 5A-5C, also described in more detail below, show examples of some of the different scan orders that can be used for 8x8 blocks of video data. The techniques in this description, however, can also apply to a wide variety of other scan orders, including a diagonal scan order, scan orders that are combinations of zigzag, horizontal, vertical and/or diagonal scan orders, as well as scan orders that are partly zigzag, partly horizontal, partly vertical and/or partly diagonal. In addition, the techniques in this description may also consider a scan order that is itself adaptive, based on statistics associated with previously encoded blocks of video data (blocks that have the same block size or encoding mode as the current block being encoded, for example). For example, an adaptive scan order may be the scan order associated with the block, in some cases.
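For illustration, the three scan orders mentioned above can be generated for an NxN block as lists of (row, column) positions starting at the DC position; the following Python sketch is generic and is not the exact scan definition of any particular standard:

```python
# Illustrative generation of zigzag, horizontal and vertical scan orders for an
# N x N block, each starting at the DC position (row 0, column 0).

def horizontal_scan(n):
    return [(r, c) for r in range(n) for c in range(n)]

def vertical_scan(n):
    return [(r, c) for c in range(n) for r in range(n)]

def zigzag_scan(n):
    order = []
    for s in range(2 * n - 1):                    # s = row + column, one anti-diagonal at a time
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order += diag if s % 2 else diag[::-1]    # alternate the traversal direction
    return order

print(zigzag_scan(4)[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```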
[0056] Given a coded block flag that indicates that at least one significant coefficient exists within a given block, and a scan order for the block, an SM for the block can be encoded as follows. The two-dimensional block of quantized transform coefficients can first be mapped into a one-dimensional array using the scan order. For each coefficient in the array, following the scan order, a one-bit significant coefficient flag can be encoded (corresponding to the syntax element "significant_coeff_flag", for example). That is, each position in the array can be assigned a binary value, which can be set to "1" if the corresponding coefficient is significant, and set to "0" if it is not significant (that is, zero). If a given significant coefficient flag is equal to "1", indicating that the corresponding coefficient is significant, an additional one-bit last significant coefficient flag can also be encoded (corresponding to the syntax element "last_significant_coeff_flag", for example), which can indicate whether the corresponding coefficient is the last significant coefficient within the array (that is, within the block given the scan order). Specifically, each last significant coefficient flag can be set to "1" if the corresponding coefficient is the last significant coefficient within the array, and set to "0" otherwise. If the last position in the array is reached in this way, and the SM encoding process has not been terminated by a last significant coefficient flag equal to "1", then it can be inferred that the last coefficient in the array (and therefore in the block, given the scan order) is significant, and no last significant coefficient flag may be coded for the last position in the array.
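A hedged sketch of this SM encoding pass, which simply collects the flags in a list following the scan order rather than entropy coding them, might look as follows (the syntax element names are used only as labels):

```python
# Sketch of significance map (SM) coding: emit a significant coefficient flag
# per scanned position and, for each significant coefficient, a last
# significant coefficient flag. The final scan position is never flagged; its
# significance is inferred, as described in the text.

def encode_sm(block, scan):
    """block: 2-D list of quantized coefficients; scan: list of (row, col)."""
    coeffs = [block[r][c] for r, c in scan]
    last = max(i for i, v in enumerate(coeffs) if v != 0)   # assumes >= 1 non-zero coefficient
    flags = []
    for i, v in enumerate(coeffs[:last + 1]):
        if i == len(coeffs) - 1:
            break   # significance of the final scan position is inferred, not coded
        sig = 1 if v != 0 else 0
        flags.append(("significant_coeff_flag", sig))
        if sig:
            flags.append(("last_significant_coeff_flag", 1 if i == last else 0))
    return flags

block = [[6, -1, 0, 0],
         [3,  0, 0, 0],
         [0,  0, 0, 0],
         [0,  0, 0, 0]]
scan = [(r, c) for r in range(4) for c in range(4)]   # horizontal scan
print(encode_sm(block, scan))
```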
[0057] Figures 4B-4C are conceptual diagrams showing examples of sets of significant coefficient flags and last significant coefficient flags, respectively, which correspond to SM data for the block shown in Figure 4A, presented in map, rather than array, form. It should be noted that the significant coefficient flags and the last significant coefficient flags, as described above, can be set to different values in other examples (for example, a significant coefficient flag can be set to "0" if the corresponding coefficient is significant, and to "1" if it is not significant, and a last significant coefficient flag can be set to "0" if the corresponding coefficient is the last significant coefficient, and to "1" if it is not the last significant coefficient).
[0058] After the SM is encoded, as described above, the value of each significant coefficient (that is, the magnitude and sign of each significant coefficient, indicated, for example, by the syntax elements "coeff_abs_level_minus1" and "coeff_sign_flag", respectively) within the block can also be encoded.
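As a small illustration, the value of each significant coefficient could be mapped to the magnitude-minus-one and sign flag representation mentioned above as follows (the container format is an assumption of this sketch, not a bitstream syntax):

```python
# Hedged sketch: represent a significant coefficient value as a magnitude
# minus one plus a sign flag, mirroring the syntax element names above.

def value_syntax(coeff):
    assert coeff != 0, "only significant (non-zero) coefficients are coded"
    return {"coeff_abs_level_minus1": abs(coeff) - 1,
            "coeff_sign_flag": 1 if coeff < 0 else 0}

print(value_syntax(6))   # {'coeff_abs_level_minus1': 5, 'coeff_sign_flag': 0}
print(value_syntax(-1))  # {'coeff_abs_level_minus1': 0, 'coeff_sign_flag': 1}
```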
[0059] According to some techniques, a fixed scan order can be used to encode blocks of video data, as described above, such as, for example, the zigzag scan order. According to other techniques, several scan orders can be used to encode the blocks. In some examples, "adaptive coefficient scanning" (ACS) can be used, in which the scan order is adapted over time, and the currently adapted scan order is used to encode a specific block of coefficients at any given time. In still other techniques, the video encoder 20 can test multiple scan orders based on one or more compression efficiency metrics and select the best scan order to encode the blocks. In addition, video encoder 20 can indicate the scan order to video decoder 30 by encoding an ACS index, which can represent any of several scan orders (using index 0 for the zigzag scan order, index 1 for the horizontal scan order, and index 2 for the vertical scan order, for example).
[0060] According to some techniques, video encoder 20 can encode the ACS index only when the last significant coefficient is not located at the first position in the scan order (which corresponds to the top left position within the block, commonly referred to as the "DC" position). The video encoder 20 can encode the ACS index in this way because the video decoder 30 does not need an indication of the scan order used by the video encoder 20 in the case where the last (and only) significant coefficient within the block is located in the DC position, since all possible scan orders can start with the DC position, as shown in Figures 5 and 6, also described in more detail below.
[0061] In the event that the last significant coefficient within the block is not located in the DC position, the video encoder 20 can encode the ACS index as follows. Video encoder 20 can encode a first signal ("bin1", for example), which indicates whether the scan order is the zigzag scan order (bin1 = "0", for example) or not (bin1 = "1", for example). If the scan order is not the zigzag scan order, video encoder 20 can encode a second signal ("bin2", for example), which indicates whether the scan order is the horizontal scan order (bin2 = "0", for example) or the vertical scan order (bin2 = "1", for example). In the same way, the video decoder 30 can receive and decode the first signal and the second signal to determine the ACS index. Therefore, instead of always encoding the ACS index, the video encoder 20 and/or the video decoder 30 can encode the ACS index only when the last significant coefficient is not located in the DC position.
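An illustrative sketch of this ACS index signaling, using the example index assignment given above (0 for zigzag, 1 for horizontal, 2 for vertical) and hypothetical function names, is:

```python
# Sketch of ACS index signalling: nothing is signalled when the last
# significant coefficient sits at the DC position; otherwise bin1 separates
# zigzag from non-zigzag and bin2 separates horizontal from vertical.

def encode_acs_index(acs_index, last_pos_is_dc):
    if last_pos_is_dc:
        return []                                  # decoder does not need the scan order
    if acs_index == 0:
        return [0]                                 # bin1 = 0 -> zigzag
    return [1, 0] if acs_index == 1 else [1, 1]    # bin2: 0 horizontal, 1 vertical

def decode_acs_index(bins):
    if not bins:
        return None                                # only the DC coefficient is significant
    if bins[0] == 0:
        return 0
    return 1 if bins[1] == 0 else 2

print(encode_acs_index(2, last_pos_is_dc=False))   # [1, 1]
print(decode_acs_index([1, 0]))                    # 1 (horizontal)
```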
[0062] As previously described, according to the techniques of this description, the last significant coefficient position information for a specific block of video data can be encoded using x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order associated with the block. In some examples, the x coordinate can correspond to a column number of the position within the block, and the y coordinate can correspond to a row number of the position within the block. For example, the row and column numbers can be relative to the row and column numbers that correspond to a reference position, or "origin", within the block, such as, for example, the DC position. In accordance with these techniques, the last significant coefficient position information for a block of video data may not be encoded using SM encoding, as described above, but rather by explicitly encoding the x and y coordinates of the position of the last significant coefficient within the block according to the scan order associated with the block. According to such techniques, the x and y coordinates can be encoded independently of the remaining SM data (that is, the significant coefficient flags, or significant coefficient position information) for the block. For example, the x and y coordinates can be encoded before encoding the significant coefficient position information for the block.
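By way of illustration, the (x, y) coordinates of the last significant coefficient for a block, given a scan order, could be derived as in the following sketch, with x as the column number and y as the row number relative to the DC position:

```python
# Hedged sketch of deriving the (x, y) coordinates of the last significant
# coefficient of a block under a given scan order.

def last_significant_xy(block, scan):
    """scan is a list of (row, col) positions covering the whole block."""
    last_xy = None
    for row, col in scan:
        if block[row][col] != 0:
            last_xy = (col, row)       # (x, y) = (column, row)
    return last_xy                     # None if the block has no significant coefficient

block = [[6, -1, 0, 0],
         [3,  0, 0, 0],
         [0,  0, 0, 0],
         [0,  0, 0, 0]]
vertical = [(r, c) for c in range(4) for r in range(4)]
print(last_significant_xy(block, vertical))   # (1, 0): column 1, row 0
```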
[0063] In some examples compatible with the techniques of this description, to encode the x and y coordinates, the video encoder 20 and / or the video decoder 30 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order. In particular, the statistics can indicate the probability that a coordinate, such as an x or y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order, comprises a given value (such as, for example, "0", "1", "2", etc.). In other words, the statistics can indicate the probability that each of the x and y coordinates described above will comprise a given value. The video encoder 20 and / or the video decoder 30 can determine the statistics and encode the x and y coordinates based on the statistics, using context-adaptive entropy coding, for example. In some instances, the video encoder 20 and / or the video decoder 30 can determine the statistics using position information of the last significant coefficient for previously encoded video data blocks, such as x and y coordinate values for the previously encoded blocks. In other examples, the video encoder 20 and / or the video decoder 30 can update the statistics based on the x and y coordinates to reflect the likelihood that the respective coordinates will comprise specific values. As previously described, the statistics can vary depending on the scan order that is used to code the block.
[0064] As an example compatible with the techniques of this description, to encode the x and y coordinates based on the statistics, the video encoder 20 and / or the video decoder 30 can perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates and the scan order. In this example, video encoder 20 and / or video decoder 30 can use the scan order to select the specific context model that includes the statistics. That is, the video encoder 20 and / or the video decoder 30 can select unique statistics to encode the x and y coordinates when using a specific scan order to encode the block.
[0065] In addition, in cases where one coordinate (the y coordinate, for example) is encoded after another coordinate (the x coordinate, for example), the video encoder 20 and / or the video decoder 30 can encode the coordinate using the value of the other, previously coded coordinate as a context. That is, the value of a previously coded one of the x and y coordinates can be used to further select statistics within the context model that indicate the probability that the other, currently coded coordinate comprises a given value. The video encoder 20 and / or the video decoder 30 can then use the selected statistics to encode the x and y coordinates by performing context-adaptive entropy coding.
[0066] As another example compatible with the techniques of this description, the x and y coordinates can each be represented using a unary code word that comprises a sequence of one or more bits, or "binaries". In other words, the x and y coordinates can be "binarized". Thus, to encode the x and y coordinates based on the statistics, the video encoder 20 and / or the video decoder 30 can encode each binary of a code word that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability that the coordinate comprises a given value, can include probability estimates that indicate the probability that each binary of the code word corresponding to the coordinate comprises a given value ("0" or "1", for example). In addition, the statistics may include different probability estimates for each code word binary, depending on the position of the respective binary within the code word. In some examples, the video encoder 20 and / or the video decoder 30 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, for example, code word binaries that correspond to the x and y coordinates for the previously coded blocks, for example, as part of determining the statistics based on the position information of the last significant coefficient for the previously coded blocks, as described above. In other examples, the video encoder 20 and / or the video decoder 30 can also update the probability estimates using the value of each binary, for example, as part of updating the statistics based on the x and y coordinates, as also described earlier. The video encoder 20 and / or the video decoder 30 can use the probability estimates to encode each binary by performing context-adaptive entropy coding.
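A unary binarization of a coordinate value, of the kind described above, could look like the sketch below. This is only one possible binarization, shown under the assumption that a value is represented by that many ones followed by a terminating zero; the helper name unary_bins is hypothetical, and real codecs typically use truncated or concatenated codes.

    def unary_bins(value):
        """Unary codeword for a non-negative coordinate value:
        'value' ones followed by a terminating zero, e.g. 3 -> [1, 1, 1, 0]."""
        return [1] * value + [0]

    # Each bin would then be coded with a context-adaptive coder using a
    # probability estimate selected, among other things, by the bin's
    # position within the codeword.
    assert unary_bins(0) == [0]
    assert unary_bins(3) == [1, 1, 1, 0]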
[0067] A disadvantage of the techniques described above is that when encoding position information of the last significant coefficient for a block of video data, the video encoder 20 and / or the video decoder 30 may use different statistics depending on the scan order used by the video encoder 20 and / or video decoder 30 to encode the block. In other words, video encoder 20 and / or video decoder 30 can each determine and maintain (update, for example) a series of statistical sets to encode position information for the last significant coefficient for data blocks when a series of scan orders is used to encode the blocks. In some cases, the sets of statistics determined and maintained for scan orders that are symmetric to each other may include the same information, or similar information, as previously described. In these cases, the determination and maintenance of the statistical sets can result in the inefficient use of resources of the coding system and in the unnecessary complexity of the coding system.
[0068] Another disadvantage of the techniques described above is that, when video encoder 20 and / or video decoder 30 encode the position information of the last significant coefficient for blocks of video data using common statistics, regardless of the scan orders used to encode the blocks, the statistics may not be as accurate as statistics that are individually determined and maintained (updated, for example) for each scan order. That is, common statistics can indicate the probabilities that positions within a given block of video data correspond to the position of the last significant coefficient within the block according to a scan order associated with the block less accurately than statistics individually determined and maintained for the scan order used to code the block. In these cases, encoding the position information of the last significant coefficient with the use of common statistics can result in reduced coding efficiency.
[0069] Yet another disadvantage of the techniques described above is that, in some cases, the video encoder 20 and / or the video decoder 30 can encode a block of video data using one of a series of scan orders that originate at a common position within the block, such as the DC position. In these cases, when a position of the last significant coefficient within the block according to a scan order associated with the block corresponds to the common position, there are no significant coefficients within the block other than the coefficient located at the common position. Therefore, the video encoder 20 and / or the video decoder 30 need not encode a position of the last significant coefficient within the block. In other words, encoding the position information of the last significant coefficient for the block in its entirety, represented, for example, using x and y coordinates, as previously described, may not be necessary in this case, since this may, again, result in reduced coding efficiency.
[0070] In addition, in the example above, when a position of the last significant coefficient within the block does not correspond to the common position, and the position information of the last significant coefficient for the block must be encoded in its entirety, the information may, in some cases, be coded using statistics that are not accurate, such as, for example, statistics that do not take advantage of the correlation described above between the position information of the last significant coefficient and the scan order information for the block, which can, again, result in reduced coding efficiency.
[0071] Therefore, this description describes techniques that can enable the position information of the last significant coefficient for a block of video data to be encoded more efficiently than with other techniques, and with the use of coding systems that are less complex relative to other systems. As an example, the position information of the last significant coefficient can be encoded using coding systems that are less complex compared to other systems by encoding the information using common statistics when one of a series of scan orders is used to encode the block, for example, by encoding x and y coordinates and interchanged x and y coordinates that indicate the information, depending on the scan order used to encode the block. According to this example, the position information of the last significant coefficient can also be more effectively encoded by updating the common statistics based on the x and y coordinates and the interchanged x and y coordinates, which can result in greater accuracy of the statistics. As another example, the position information of the last significant coefficient can be encoded more effectively by encoding the information incrementally, to the extent necessary, and, when encoding the information in its entirety, doing so based on the scan order, using the scan order as a context, for example.
[0072] In some examples, the video encoder 20 of the source device 12 can be configured to encode certain blocks of video data (one or more macroblocks or TUs of a CU, for example), and the video decoder 30 of the destination device 14 can be configured to receive encoded video data from video encoder 20, such as, for example, from modem 28 and receiver 26. According to the techniques of this description, for example, video encoder 20 and / or the video decoder 30 can be configured to encode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block when the scan order comprises a first scan order. The video encoder 20 and / or the video decoder 30 can also be configured to encode interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order. For example, the second scan order may be different from the first scan order.
[0073] In this example, the first scan order and the second scan order can be symmetrical with respect to each other (or at least partially symmetrical). For example, the first scan order can be a horizontal scan order and the second scan order can be a vertical scan order, where the horizontal scan order and the vertical scan order originate from a common position within the block. For example, the common position can be the DC position, as described earlier.
[0074] In this example, to encode the x and y coordinates and the interchanged x and y coordinates, the video encoder 20 and / or the video decoder 30 can also be configured to determine statistics that indicate the probability that each of the x and y coordinates will comprise a given value, where the encoding of the x and y coordinates and of the interchanged x and y coordinates comprises encoding based on the statistics. For example, the probability that the x coordinate comprises a given value can be used to encode the x coordinate and the interchanged y coordinate, and the probability that the y coordinate comprises a given value can be used to encode the y coordinate and the interchanged x coordinate. The video encoder 20 and / or the video decoder 30 can also be configured to update the statistics based on the x and y coordinates and the interchanged x and y coordinates. For example, the probability that the x coordinate comprises a given value can be updated using the x coordinate and the interchanged y coordinate, and the probability that the y coordinate comprises a given value can be updated using the y coordinate and the interchanged x coordinate.
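A minimal sketch of the coordinate interchange described above, assuming a horizontal first scan order and a vertical second scan order that are symmetric about the DC position; the function names are illustrative and not taken from any codec.

    def coords_to_code(x, y, scan_order):
        """Return the coordinate pair actually passed to the entropy coder:
        interchanged for the vertical scan so that the same statistics
        (context models) can serve both scan orders."""
        if scan_order == "vertical":
            return y, x          # interchanged x and y coordinates
        return x, y              # horizontal (first) scan order: unchanged

    def coords_from_code(cx, cy, scan_order):
        """The decoder applies the same rule in reverse after decoding."""
        return (cy, cx) if scan_order == "vertical" else (cx, cy)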
[0075] As an example, to encode the x and y coordinates and the interchanged x and y coordinates based on the statistics, the video encoder 20 and / or the video decoder 30 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application by the video encoder 20 and / or the video decoder 30 of a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates, the interchanged x and y coordinates and the scan order.
[0076] It should be noted that, in some examples, the video encoder 20 and / or the video decoder 30 can also be configured to encode the x and y coordinates when the scan order comprises a third scan order. For example, the third scan order may be different from the first scan order and the second scan order. As an example, the third scan order can be a zigzag scan order or a diagonal scan order, where the zigzag or diagonal scan order also originates from the common position within the block, such as the DC position, for example .
[0077] In this example, in some cases the video encoder 20 and / or the video decoder 30 can also be configured to encode information that identifies the scan order, that is, the scan order information for the block. In addition, in some cases the video encoder 20 and / or the video decoder 30 can also be configured to encode information that identifies the positions of other significant coefficients within the block, that is, the position information of significant coefficients for the block.
[0078] As another example, video encoder 20 and / or video decoder 30 can be configured to encode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block. For example, the scan order can be one of a series of scan orders, where each of the scan order series originates from a common position within the block, such as the DC position, for example.
[0079] In this example, to encode the x and y coordinates, the video encoder 20 and / or the video decoder 30 can be configured to encode information indicating whether the x coordinate corresponds to the common position, encode information indicating whether the y coordinate corresponds to the common position and, in the case that the x coordinate does not correspond to the common position and the y coordinate does not correspond to the common position, encode information that identifies the scan order. The video encoder 20 and / or the video decoder 30 can also be configured to, in the case that the x coordinate does not correspond to the common position, encode the x coordinate based on the scan order and, in the case that the y coordinate does not correspond to the common position, encode the y coordinate based on the scan order.
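The incremental signaling described in the preceding paragraph can be sketched as follows. The ordering of the flags and the names are illustrative assumptions, not the normative syntax of any standard.

    def code_last_position(x, y, scan_order, dc=(0, 0)):
        """Sketch of incremental last-significant-coefficient signaling:
        emit a flag per coordinate telling whether it equals the DC (common)
        position, and only code what is still unknown afterwards."""
        symbols = []
        symbols.append(("x_is_dc", x == dc[0]))
        symbols.append(("y_is_dc", y == dc[1]))
        if x != dc[0] and y != dc[1]:
            # The scan order is signaled only when both coordinates differ
            # from the common position.
            symbols.append(("scan_order", scan_order))
        if x != dc[0]:
            symbols.append(("x", x))   # coded using scan-order-based contexts
        if y != dc[1]:
            symbols.append(("y", y))
        return symbols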
[0080] In this example, to encode the x coordinate and the y coordinate based on the scan order, the video encoder 20 and / or the video decoder 30 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application, by the video encoder 20 and / or the video decoder 30, of a context model based on at least one context. For example, the at least one context can include the scan order.
[0081] In any case, after encoding the position information of the last significant coefficient and, in some cases, the scan order information and the position information of significant coefficients, that is, the SM data for the block, in the manner described above, the video encoder 20 and / or the video decoder 30 can also encode the value of each significant coefficient (such as, for example, the magnitude and the sign of each significant coefficient, indicated by the syntax elements "coeff_abs_level_minus1" and "coeff_sign_flag", respectively) within the block.
[0082] For example, the techniques of this description may allow the video encoder 20 and / or the video decoder 30 to encode the position information of the last significant coefficient for the block more effectively than when using other methods, and may allow the video encoder 20 and / or the video decoder 30 to be less complex compared to other systems. In this way, there can be a relative bit savings for the bit stream that includes the position information of the last significant coefficient and a relative reduction in complexity for the video encoder 20 and / or the video decoder 30 used to encode the information, when using the techniques of this description.
[0083] The video encoder 20 and video decoder 30 can each be implemented as any one of several suitable encoder or decoder circuits, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuits, software, hardware, firmware or any combination thereof. Each of the video encoder 20 and video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined encoder / decoder (CODEC). An apparatus including the video encoder 20 and / or the video decoder 30 may comprise an integrated circuit, a microprocessor and / or a wireless communication device, such as a cell phone.
[0084] Figure 2 is a block diagram showing an example of a video encoder 20 that can implement techniques to effectively encode position information of the last significant coefficient based on scan order information for a video data block, compatible with the techniques of this description. The video encoder 20 can perform intracoding and intercoding of blocks within video frames, including macroblocks, or CUs, or their partitions or subpartitions. Intracoding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame. Intercoding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames of a video sequence. Intramode (I mode) can refer to any of several spatial compression modes, and intermodes, such as the unidirectional prediction mode (P mode) or the bidirectional prediction mode (B mode), can refer to any of several temporal compression modes.
[0085] As shown in Figure 2, video encoder 20 receives the current block of video data within a video frame to be encoded. In the example in Figure 2, the video encoder 20 includes a motion compensation unit 44, a motion estimation unit 42, a memory 64, an adder 50, a transform module 52, a quantization unit 54 and an entropy coding unit 56. For video block reconstruction, the video encoder 20 also includes an inverse quantization unit 58, an inverse transform module 60 and an adder 62. A deblocking filter (not shown in Figure 2) can also be included to filter block boundaries to remove blocking artifacts from the reconstructed video. If desired, the deblocking filter would typically filter the output of adder 62.
[0086] During the encoding process, the video encoder 20 receives a video frame or slice to be encoded. The frame or slice can be divided into several video blocks. The motion estimation unit 42 and the motion compensation unit 44 can perform interpredictive encoding of a given received video block with respect to one or more blocks in one or more reference frames, to obtain temporal compression. The intraprediction module 46 can perform intrapredictive coding of a given received video block with respect to one or more neighboring blocks in the same frame or slice as the block to be encoded, to obtain spatial compression.
[0087] The mode selection unit 40 can select one of the coding modes, that is, one of several intra or intercoding modes, based on coding results (resulting coding rate and level of distortion, for example) and based on the type of frame or slice for the frame or slice that includes the given received block being encoded, and supply the intra or intercoded block to adder 50 to generate residual block data and to adder 62 to reconstruct the encoded block for use in a reference frame or reference slice. In general, intraprediction involves predicting the current block with respect to neighboring, previously coded blocks, while interprediction involves motion estimation and motion compensation for the temporal prediction of the current block.
[0088] The motion estimation unit 42 and the motion compensation unit 44 represent the interprediction elements of the video encoder 20. The motion estimation unit 42 and the motion compensation unit 44 can be highly integrated, but are shown separately for conceptual purposes. Motion estimation is the process of generating motion vectors, which estimate the motion for the video blocks. A motion vector, for example, can indicate the displacement of a predictive block within a predictive reference frame (or another coding unit). A predictive block is a block that is considered to correspond closely to the block to be coded, in terms of pixel difference, which can be determined by the sum of absolute differences (SAD), by the sum of squared differences (SSD) or by other difference metrics. A motion vector can also indicate the displacement of a partition of a block. Motion compensation may involve the search for or generation of the predictive block based on the motion vector determined by the motion estimation. Again, the motion estimation unit 42 and the motion compensation unit 44 can be functionally integrated, in some examples.
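For instance, the sum of absolute differences mentioned above can be computed as in the following sketch, written in plain Python over lists of pixel values; a real encoder would of course use optimized, block-based routines.

    def sad(block, candidate):
        """Sum of absolute differences between a block to be coded and a
        candidate predictive block of the same size (rows of pixel values)."""
        return sum(abs(a - b)
                   for row_a, row_b in zip(block, candidate)
                   for a, b in zip(row_a, row_b))

    # The motion search would evaluate sad() (or SSD, etc.) for candidate
    # positions in the reference frame and keep the one with the lowest cost.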
[0089] The motion estimation unit 42 can calculate a motion vector for a video block of an intercoded frame by comparing the video block with the video blocks of a reference frame in memory 64. The motion compensation unit 44 can also interpolate sub-integer pixels of the reference frame, such as an I-frame or a P-frame, for the purposes of this comparison. The ITU H.264 standard, as an example, describes two lists: list 0, which includes reference frames that have a display order prior to that of the current frame being encoded, and list 1, which includes reference frames that have a display order later than that of the current frame being encoded. Therefore, the data stored in memory 64 can be organized according to these lists.
[0090] The motion estimation unit 42 can compare blocks of one or more reference frames from memory 64 with a block to be encoded of the current frame, such as, for example, a P frame or a B frame. When the reference frames in memory 64 include values for sub-integer pixels, the motion vector calculated by motion estimation unit 42 can refer to a sub-integer pixel location of a reference frame. The motion estimation unit 42 and / or the motion compensation unit 44 can also be configured to calculate values for sub-integer pixel positions of reference frames stored in memory 64 if no values for the sub-integer pixel positions are stored in memory 64. Motion estimation unit 42 can send the motion vector to entropy coding unit 56 and motion compensation unit 44. The reference frame block identified by a motion vector can be referred to as an interpredictive block or, more generally, a predictive block. The motion compensation unit 44 can calculate prediction data based on the predictive block.
[0091] The intraprediction module 46 can intrapredict the current block, as an alternative to the interprediction performed by the motion estimation unit 42 and the motion compensation unit 44, as described above. In particular, the intraprediction module 46 can determine the intraprediction mode to be used to encode the current block. In some examples, the intraprediction module 46 can encode the current block using various intraprediction modes, such as during separate coding passes, and the intraprediction module 46 (or mode selection unit 40, in some examples) can select an appropriate intraprediction mode to be used from the tested modes. For example, intraprediction module 46 can calculate rate-distortion values using rate-distortion analysis for the various tested intraprediction modes and select the intraprediction mode that has the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines the degree of distortion (or error) between an encoded block and an original, uncoded block that was encoded to produce the encoded block, as well as the bit rate (that is, the number of bits) used to produce the encoded block. The intraprediction module 46 can calculate ratios of the distortions and rates for the various encoded blocks to determine the intraprediction mode that has the best rate-distortion value for the block.
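A common way to express the rate-distortion comparison described above is a Lagrangian cost, sketched below. The use of a single lambda multiplier is a standard assumption for illustration and is not a detail taken from this description.

    def rd_cost(distortion, bits, lam):
        """Lagrangian rate-distortion cost J = D + lambda * R."""
        return distortion + lam * bits

    def best_intra_mode(candidates, lam):
        """candidates: iterable of (mode, distortion, bits) triples.
        Returns the mode with the lowest rate-distortion cost."""
        return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]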
[0092] After predicting the current block, using, for example, intraprediction or interprediction, the video encoder 20 can form a residual video block by subtracting the prediction data calculated by the motion compensation unit 44 or by the intraprediction module 46 from the original video block being encoded. The adder 50 represents the component or components that can perform this subtraction operation. Transform module 52 can apply a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising residual transform coefficient values. Transform module 52 can perform other transforms, such as those defined by the H.264 standard, which are conceptually similar to DCT. Wavelet transforms, integer transforms, subband transforms or other types of transforms can also be used. In any event, transform module 52 can apply the transform to the residual block, producing a block of residual transform coefficients. The transform can convert the residual information from the pixel domain into a transform domain, such as the frequency domain. The quantization unit 54 can quantize the residual transform coefficients to further reduce the bit rate. The quantization process can reduce the bit depth associated with some or all of the coefficients. The degree of quantization can be modified by adjusting a quantization parameter.
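The residual formation and quantization steps described above can be sketched as follows. The transform itself is left abstract, and the uniform quantization step derived from a quantization parameter is an illustrative assumption; a real codec also applies rounding offsets.

    def residual(original, prediction):
        """Pixel-wise difference between the original block and its prediction."""
        return [[o - p for o, p in zip(row_o, row_p)]
                for row_o, row_p in zip(original, prediction)]

    def quantize(coeffs, qstep):
        """Uniform scalar quantization of transform coefficients (sketch)."""
        return [[int(c / qstep) for c in row] for row in coeffs]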
[0093] Following quantization, the entropy coding unit 56 can entropy encode the quantized transform coefficients, which can include the use of CAVLC, CABAC, PIPE or another entropy coding technique. Following the entropy coding by entropy coding unit 56, the encoded video can be transmitted to another device or archived for later transmission or retrieval.
[0094] In some cases, the entropy coding unit 56 or another unit of video encoder 20 can be configured to perform other coding functions, in addition to entropy encoding the quantized transform coefficients as described above. For example, the entropy coding unit 56 can construct header information for the block (macroblock, CU or LCU, for example), or for the video frame that contains the block, with appropriate syntax elements for transmission in the encoded video bit stream. According to some coding standards, such syntax elements may include position information of the last significant coefficient for the block (such as, for example, a macroblock or a TU of a CU), as described above. Also as previously described, the position information of the last significant coefficient can consume a high percentage of the bit rate of the total compressed video if encoded inefficiently. Therefore, this description describes techniques that can allow the position information of the last significant coefficient for the block to be coded more effectively than when using other methods. In addition, this description describes the use of coding systems that are less complex compared to other systems when encoding the position information of the last significant coefficient for the block.
[0095] In some examples, the entropy coding unit 56 of the video encoder 20 can be configured to encode certain blocks of video data (one or more macroblocks or TUs of a CU, for example). In accordance with the techniques of this description, for example, the entropy coding unit 56 can be configured to encode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block when the scan order comprises a first scan order. The entropy coding unit 56 can also be configured to encode interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order. For example, the second scan order may be different from the first scan order.
[0096] In this example, the first scan order and the second scan order can be symmetrical with respect to each other (or at least partially symmetrical). For example, the first scan order can be a horizontal scan order and the second scan order can be a vertical scan order, where the horizontal scan order and the vertical scan order originate from a common position within the block, such as the DC position, for example.
[0097] Specifically, the first scan order and the second scan order can each be a scan order that can be used by entropy coding unit 56 to encode the block. For example, the first and second scan orders can be scan orders used by video encoder 20 to encode blocks of video data and by video decoder 30 to decode blocks within the corresponding encoding system 10 comprising the encoder video 20 and video decoder 30. In some examples, the first and second scan orders may be just a few of the scan orders used within system 10 to encode the blocks. In other examples, the first and second scan orders may be the only scan orders used within system 10 to code the blocks.
[0098] In addition, the exchanged x and y coordinates also correspond to the position information of the last significant coefficient for the block, but are also processed, that is, exchanged, by the entropy coding unit 56 to allow the information to be coded more effectively than when other techniques are used, as previously described. Specifically, the permuted x and y coordinates can allow the use of common statistics to encode the x and y coordinates and the exchanged x and y coordinates that indicate the position information of the last significant coefficient for the block, as also described previously.
[0099] In this example, to encode the x and y coordinates and the interchanged x and y coordinates, the entropy coding unit 56 can also be configured to determine statistics that indicate the probability that each of the x and y coordinates will comprise a given value, where the encoding of the x and y coordinates and of the interchanged x and y coordinates comprises coding based on the statistics. For example, the probability that the x coordinate comprises a given value can be used to encode the x coordinate and the interchanged y coordinate, and the probability that the y coordinate comprises a given value can be used to encode the y coordinate and the interchanged x coordinate.
[0100] Generally, the statistics can indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order. In particular, the statistics can indicate the probability that a coordinate, such as the x coordinate or the y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order, comprises a given value (such as, for example, "0", "1", "2", etc.).
[0101] As previously described, since the first and second scan orders can be symmetric with respect to each other (or at least partially symmetric), the probability that the x coordinate will comprise a given value when the scan order comprises the first scan order can be identical or similar to the probability that the y coordinate will comprise the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability that the y coordinate will comprise a given value when the scan order comprises the first scan order can be identical or similar to the probability that the x coordinate will comprise the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same probability, or a similar probability, of comprising the given value as the interchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the statistics can also indicate the probability that each of the x and y coordinates will comprise a given value. In some examples, the entropy coding unit 56 can determine the statistics using the position information of the last significant coefficient for previously encoded video data blocks, such as x and y coordinate values and interchanged x and y coordinate values for the previously encoded blocks.
[0102] The entropy coding unit 56 can also be configured to update the statistics based on the x and y coordinates and the permuted x and y coordinates, such that the probability of the x coordinate comprising a given value is updated using the x coordinate and the permuted y coordinate, and the probability of y-coordinate comprising a given value is updated using the y-coordinate and the x-coordinate exchanged. For example, the updated statistics can be used to encode position information of the last significant coefficient for blocks of video data subsequently encoded in the manner described above.
[0103] As an example, to encode the x and y coordinates and the x and y coordinates exchanged based on the statistics, the entropy coding unit 56 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application, by the entropy coding unit 56, of a context model that includes statistics based on at least one context. For example, the at least one context can include one of the coordinates, the exchanged x and y coordinates and the scan order. As previously mentioned, in addition to CABAC, the techniques described for exchanging the x and y coordinates for coding purposes can also be used in other coding techniques by context-adaptive entropy, such as CAVLC, PIPE and other coding techniques adaptive to the context.
[0104] In this example, the entropy coding unit 56 can use the scan order, such as the horizontal or vertical scan order, to select the specific context model that includes the statistics. That is, the entropy coding unit 56 can select the same statistics to encode the x and y coordinates when using the first scan order to encode the block, and to encode the interchanged x and y coordinates when using the second scan order to encode the block. Furthermore, in cases where one coordinate (the y coordinate, for example) is encoded after another coordinate (the x coordinate, for example), the entropy coding unit 56 can encode the coordinate using the value of the other, previously coded coordinate as a context. That is, the value of a previously coded one of the x and y coordinates or of the interchanged x and y coordinates, depending on the scan order used to code the block, can be used to further select statistics within the context model that indicate the probability that the other, currently coded coordinate comprises a given value. The entropy coding unit 56 can then use the selected statistics to encode the x and y coordinates and the interchanged x and y coordinates by performing context-adaptive entropy coding.
[0105] As also described previously, in this example the x and y coordinates and the exchanged x and y coordinates can each be represented using a unary code word that comprises a sequence of one or more binaries, that is, "binarized". Thus, to encode the x and y coordinates and the x and y coordinates exchanged based on the statistics, the entropy coding unit 56 can encode each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, may include probability estimates that indicate the probability of each binary code word corresponding to the coordinate comprising a given value (“0” or “1”, for example). In addition, statistics may include different probability estimates for each code word binary, depending on the position of the respective binary within the code word. In some examples, the entropy coding unit 56 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, for example, code word binaries that correspond to x and y coordinates and interchanged x and y coordinates for previously coded blocks, such as, for example, as part of determining statistics based on the position information of the last significant coefficient for previously coded blocks, as previously described. In other examples, the entropy coding unit 56 can also update the probability estimates using the value of each binary, as, for example, as part of updating the statistics based on the x and y coordinates and the exchanged x and y coordinates, as also described previously. . The entropy coding unit 56 can use the probability estimates to encode each binary by performing context-adaptive entropy coding.
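As a rough illustration of bin-position-dependent probability estimates, a context index could be chosen per bin as in the sketch below. The table size and the clipping of higher bin indices are illustrative assumptions, not values taken from this description.

    # One adaptive probability estimate per (coordinate, bin position) pair,
    # shared by the first and second scan orders thanks to the interchange.
    NUM_BIN_CONTEXTS = 8

    def bin_context(coordinate_kind, bin_index):
        """coordinate_kind: 0 for the x coordinate (or interchanged y),
        1 for the y coordinate (or interchanged x). Bins beyond the table
        share the last context."""
        return coordinate_kind * NUM_BIN_CONTEXTS + min(bin_index, NUM_BIN_CONTEXTS - 1)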
[0106] As another example, in some cases different values of a binary of a unary code word for one coordinate (x, for example) may result in different probability estimates for a corresponding binary of a unary code word for the other coordinate (y, for example). Therefore, when encoding a binary of a unary code word for one coordinate using probability estimates that correspond to the binary, as described above, using probability estimates that include information about the value of a binary, such as, for example, a corresponding binary, of a unary code word for the other coordinate can improve the accuracy of the probability estimates and, therefore, can allow effective coding. For example, the binary of the unary code word for the other coordinate can be a binary that corresponds to the binary of the unary code word for the one coordinate, for example, the binaries can be located at the same binary positions, or similar binary positions, within their respective code words. Encoding the x and y coordinates and the interchanged x and y coordinates (or x and y coordinates that indicate position information of the last significant coefficient for a block of video data, in general) in this "interleaved" way, using previously encoded binaries as contexts, can allow the use of mutual information of the respective x and y coordinates, which can allow more effective coding of the coordinates.
[0107] In other examples, the entropy coding unit 56 can be configured to encode the x and y coordinates and the interchanged x and y coordinates in an interleaved manner, generally. In some examples, the entropy coding unit 56 may be configured to encode individual binaries of the code words for the respective x and y coordinates in an interleaved manner. In other examples, the entropy coding unit 56 can be configured to encode groups of binaries of the code words in an interleaved manner. For example, some code word binaries for each of the x and y coordinates can be encoded using a first coding mode (a regular coding mode, for example), while the remaining code word binaries can be encoded using a second coding mode (a bypass coding mode, for example). Thus, the entropy coding unit 56 can be configured to encode one or more binaries of the code word that corresponds to one of the coordinates using the first coding mode (regular, for example) before encoding one or more binaries of the code word that corresponds to the other coordinate using the first coding mode, followed by encoding one or more binaries of the code word that corresponds to the one coordinate using the second coding mode (bypass, for example) before encoding one or more binaries of the code word that corresponds to the other coordinate using the second coding mode. In other examples, the entropy coding unit 56 can be configured to encode the binaries of the code words coded using the second coding mode together.
[0108] Therefore, separating the encoding of the code word binaries for each of the x and y coordinates in the manner described above can allow the binaries encoded using a specific coding mode (the bypass mode, for example) to be grouped together, which can improve coding performance.
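A sketch of the grouping of regular-mode and bypass-mode bins described above follows. The split point between regular and bypass bins (num_regular) is an assumption made only for illustration.

    def order_bins(x_bins, y_bins, num_regular=3):
        """Emit the regular-coded bins of both coordinates first, then the
        bypass-coded bins of both, so bypass bins end up grouped together."""
        stream = []
        stream += [("regular", "x", b) for b in x_bins[:num_regular]]
        stream += [("regular", "y", b) for b in y_bins[:num_regular]]
        stream += [("bypass", "x", b) for b in x_bins[num_regular:]]
        stream += [("bypass", "y", b) for b in y_bins[num_regular:]]
        return stream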
[0109] In other words, in cases where each of the x and y coordinates and the interchanged x and y coordinates comprises a sequence of one or more binaries, the entropy coding unit 56 can be configured to encode the x and y coordinates and the interchanged x and y coordinates by performing context-adaptive entropy coding, which includes applying the context model that includes the statistics based on one of the x and y coordinates and the interchanged x and y coordinates. The entropy coding unit 56 can be configured to encode the respective x and y coordinates by coding at least one binary of the sequence that corresponds to one of the coordinates, selecting the statistics from the context model based, at least in part, on the value of at least one binary of the sequence that corresponds to the other coordinate. In addition, the entropy coding unit 56 can be configured to encode the one or more binaries of the sequence that correspond to the one coordinate and the one or more binaries of the sequence that correspond to the other coordinate in an interleaved manner.
[0110] Therefore, to encode the position information of the last significant coefficient, the entropy coding unit 56 can be configured to encode the x and y coordinates and the interchanged x and y coordinates in an interleaved manner, using previously encoded binaries. That is, the entropy coding unit 56 can be configured to encode each binary of a unary code word for a given coordinate by executing a context-adaptive entropy coding process that includes applying a context model based on at least one context, where the at least one context can include the position of the binary within the unary code word, as described above, and the value of one or more previously encoded binaries of the unary code word for the other coordinate. In addition, the entropy coding unit 56 can be configured to encode the x and y coordinates and the interchanged x and y coordinates, in general, in an interleaved manner.
[0111] It should be noted that, in other examples compatible with the techniques of this description, other types of code words can be used, such as, for example, truncated unary code words, exponential-Golomb code words and concatenated code words, as well as combinations of different coding techniques.
[0112] It should also be noted that, in some examples, the entropy coding unit 56 can also be configured to encode the x and y coordinates when the scan order comprises a third scan order. For example, the third scan order may be different from the first scan order and the second scan order. As an example, the third scan order can be a zigzag scan order or a diagonal scan order, where the zigzag or diagonal scan order also originates from the common position within the block, such as the DC position, for example .
[0113] In this example, the entropy coding unit 56 can also be configured to encode information that identifies the scan order, that is, the scan order information for the block. Alternatively, as previously described, entropy coding unit 56 may omit encoding scan order information for the block when entropy coding unit 56 uses an adaptive scan order to encode the block. In addition, in some cases the entropy coding unit 56 can also be configured to encode information that identifies the positions of all other significant coefficients within the block, that is, the position information of significant coefficients for the block.
[0114] For example, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators, as previously described. As also previously described, the position information of significant coefficients can be encoded by coding each significant coefficient indicator of the sequence by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0115] The context model can include probability estimates that indicate the probability that each indicator will comprise a given value ("0" or "1", for example). In some examples, the entropy coding unit 56 can determine the probability estimates using the values of significant coefficient indicators for previously encoded video data blocks. In other examples, the entropy coding unit 56 may also update the probability estimates using the value of each indicator to reflect the probability that the indicator will comprise a given value. For example, updated probability estimates can be used to encode position information of significant coefficients for blocks of video data subsequently encoded in the manner described above.
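For example, the position-based context selection for significant-coefficient indicators described in the two preceding paragraphs could be sketched as follows. Mapping each scan position directly to its own context is a simplification used only for illustration; real codecs typically group positions into shared contexts.

    def sig_flag_contexts(coeffs, scan):
        """Yield (context_index, flag) pairs for the significance map of a
        block, where 'scan' is the list of (x, y) positions in scan order and
        the position within the scan order serves as the context."""
        for ctx, (x, y) in enumerate(scan):
            yield ctx, 1 if coeffs[y][x] != 0 else 0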
[0116] As another example, the entropy coding unit 56 can be configured to encode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block. For example, the scan order can be one of a series of scan orders, where each of the scan order series originates from a common position within the block, such as the DC position, for example.
[0117] In this example, to encode the x and y coordinates, the entropy coding unit 56 can be configured to encode information that indicates whether the x coordinate corresponds to the common position, encode information that indicates whether the y coordinate corresponds to the common position and, in the case that the x coordinate does not correspond to the common position and the y coordinate does not correspond to the common position, encode information that identifies the scan order. The entropy coding unit 56 can also be configured to, in the case that the x coordinate does not correspond to the common position, encode the x coordinate based on the scan order and, in the case that the y coordinate does not correspond to the common position, encode the y coordinate based on the scan order.
[0118] In this example, to encode the x coordinate and the y coordinate based on the scan order, the entropy coding unit 56 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application by the entropy coding unit 56 of a context model based on at least one context. For example, the at least one context can include the scan order.
[0119] In addition, as an example, the entropy coding unit 56 can be configured to encode one coordinate (the y coordinate, for example) after another coordinate (the x coordinate, for example), where the entropy coding unit 56 can be configured to encode the one coordinate using the value of the other, previously encoded coordinate as a context. As another example, where each of the x and y coordinates comprises a sequence of one or more binaries, the entropy coding unit 56 can be configured to encode at least one binary of the sequence that corresponds to one of the coordinates by selecting statistics of the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the entropy coding unit 56 can be configured to encode the one or more binaries of the sequence that correspond to the one coordinate and the one or more binaries of the sequence that correspond to the other coordinate in an interleaved manner.
[0120] In any case, after encoding the position information of the last significant coefficient and, in some cases, the scan order information and the position information of significant coefficients, that is, the SM data, for the block as described above, the entropy coding unit 56 can also encode the value of each significant coefficient (such as, for example, the magnitude and sign of each significant coefficient, indicated by the syntax elements "coeff_abs_level_minus1" and "coeff_sign_flag", respectively) within the block.
[0121] Therefore, the techniques of this description may allow the entropy coding unit 56 to encode the position information of the last significant coefficient for the block more effectively than when using other methods and may allow the entropy coding unit 56 to be less complex compared to other systems. In this way, there can be a relative bit savings for the encoded bit stream that includes the position information of the last significant coefficient and a relative reduction in complexity for the entropy coding unit 56 used to encode the information, when using the techniques of this description.
[0122] The inverse quantization unit 58 and the inverse transform module 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain, for example, for later use as a reference block. The motion compensation unit 44 can calculate a reference block by adding the residual block to a predictive block of one of the frames in memory 64. The motion compensation unit 44 can also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Adder 62 adds the reconstructed residual block to the motion compensated prediction block produced by the motion compensation unit 44 to produce a reconstructed video block for storage in memory 64. The reconstructed video block can be used by the motion estimation unit 42 and the motion compensation unit 44 as a reference block for intercoding a block in a subsequent video frame.
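The reconstruction path described above can be sketched as the inverse of the earlier residual and quantization sketch, again with the transform left abstract and an illustrative uniform step size; the final clipping to the sample range is an assumption, not a detail taken from this description.

    def dequantize(levels, qstep):
        """Inverse of the uniform quantization sketch: scale levels back."""
        return [[level * qstep for level in row] for row in levels]

    def reconstruct(residual_block, prediction, bit_depth=8):
        """Add the (inverse-transformed) residual to the prediction and clip
        to the valid sample range."""
        hi = (1 << bit_depth) - 1
        return [[min(max(r + p, 0), hi) for r, p in zip(row_r, row_p)]
                for row_r, row_p in zip(residual_block, prediction)]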
[0123] In this way, video encoder 20 represents an example of a video encoder configured to encode x and y coordinates that indicate a position of the last non-zero coefficient within a video data block according to a scan order associated with the block when the scan order comprises a first scan order, and to encode interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0124] Figure 3 is a block diagram showing an example of a video decoder 30 that can implement techniques for effectively decoding position information from the last significant coded coefficient based on scan order information for a compatible video data block. with the techniques of this description. In the example in Figure 3, the video decoder 30 includes an entropy decoding unit 70, a motion compensation unit 72, an intraprediction module 74, a reverse quantization unit 76, a reverse transform unit 78, a memory 82 and an adder 80. The video decoder 30 may, in some examples, perform a decoding pass generally corresponding to the encoding pass described with respect to the video encoder 20 (Figure 2). The motion compensation unit 72 can generate prediction data based on motion vectors received from the entropy decoding unit 70.
[0125] In some examples, video decoder 30 can be configured to receive encoded video data (one or more macroblocks or TUs of a CU, for example) from video encoder 20. According to the techniques of this description, as an example, the entropy decoding unit 70 can be configured to decode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block when the scan order comprises a first scan order. The entropy decoding unit 70 can also be configured to decode interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order. For example, the second scan order may be different from the first scan order.
[0126] In this example, the first scan order and the second scan order can be symmetrical with respect to each other (or at least partially symmetrical). For example, the first scan order can be a horizontal scan order and the second scan order can be a vertical scan order, where the horizontal scan order and the vertical scan order originate from a common position within the block, such as the DC position.
[0127] Specifically, the first scan order and the second scan order can each be a scan order that can be used by entropy coding unit 56 to encode the block. For example, the first and second scan orders can be scan orders used by video encoder 20 to encode blocks of video data and by video decoder 30 to decode blocks within the corresponding encoding system 10 comprising the encoder video 20 and video decoder 30. In some examples, the first and second scan orders may be just a few of the scan orders used within system 10 to encode the blocks. In other examples, the first and second scan orders may be the only scan orders used within system 10 to code the blocks.
[0128] In addition, the interchanged x and y coordinates also correspond to the position information of the last significant coefficient for the block, but are also processed, that is, interchanged, by the entropy decoding unit 70 to allow the information to be decoded more effectively than when other techniques are used, as previously described. Specifically, the interchanged x and y coordinates can allow the use of common statistics to decode the x and y coordinates and the interchanged x and y coordinates that indicate the position information of the last significant coefficient for the block, as also described previously.
[0129] In this example, to decode the x and y coordinates and the interchanged x and y coordinates, the entropy decoding unit 70 can also be configured to determine statistics that indicate the probability that each of the x and y coordinates will comprise a given value, in which the decoding of the x and y coordinates and of the interchanged x and y coordinates comprises decoding based on the statistics. For example, the probability that the x coordinate comprises a given value can be used to decode the x coordinate and the interchanged y coordinate, and the probability that the y coordinate comprises a given value can be used to decode the y coordinate and the interchanged x coordinate.
[0130] Generally, the statistics can indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order. In particular, the statistics can indicate the probability that a coordinate, such as the x coordinate or the y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order, comprises a given value (such as, for example, "0", "1", "2", etc.).
[0131] As previously described, since the first and second scan orders can be symmetric with respect to each other (or at least partially symmetric), the probability that the x coordinate will comprise a given value when the scan order comprises the first scan order can be identical or similar to the probability that the y coordinate will comprise the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability that the y coordinate will comprise a given value when the scan order comprises the first scan order can be identical or similar to the probability that the x coordinate will comprise the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same probability, or a similar probability, of comprising the given value as the interchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the statistics can also indicate the probability that each of the x and y coordinates will comprise a given value. In some examples, entropy decoding unit 70 can determine the statistics using position information of the last significant coefficient for previously decoded video data blocks, such as x and y coordinate values and interchanged x and y coordinate values for previously decoded blocks.
[0132] The entropy decoding unit 70 can also be configured to update the statistics based on the x and y coordinates and the exchanged x and y coordinates, such that the probability of the x coordinate comprising a given value is updated using the x coordinate and the exchanged y coordinate, and the probability of the y coordinate comprising a given value is updated using the y coordinate and the exchanged x coordinate. For example, the updated statistics can be used to decode position information of the last significant coefficient for blocks of video data subsequently decoded in the manner described above.
[0133] As an example, to decode the x and y coordinates and the exchanged x and y coordinates based on the statistics, the entropy decoding unit 70 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application, by the entropy decoding unit 70, of a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates, the exchanged x and y coordinates and the scan order. As previously mentioned, in addition to CABAC, the techniques described for exchanging the x and y coordinates for coding purposes can also be used with other context-adaptive entropy coding techniques, such as CAVLC, PIPE and other context-adaptive coding techniques.
[0134] In this example, the entropy decoding unit 70 can use the scan order, such as, for example, the horizontal or vertical scan order, to select the specific context model that includes the statistics. That is, the entropy decoding unit 70 can select the same statistics to decode the x and y coordinates when using the first scan order to decode the block, and to decode the exchanged x and y coordinates when using the second scan order to decode the block. In addition, in cases where one coordinate (the y coordinate, for example) is decoded after another coordinate (the x coordinate, for example), the entropy decoding unit 70 can decode the coordinate using the value of the other, previously decoded coordinate as a context. That is, the value of a previously decoded coordinate of the x and y coordinates or of the exchanged x and y coordinates, depending on the scan order used to decode the block, can also be used to select statistics within the context model that indicate the probability that the other, currently decoded coordinate comprises a given value. The entropy decoding unit 70 can then use the selected statistics to decode the x and y coordinates and the exchanged x and y coordinates by performing context-adaptive entropy coding.
[0135] As also described previously, in this example the x and y coordinates and the exchanged x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries, that is, "binarized". Thus, to decode the x and y coordinates and the exchanged x and y coordinates based on the statistics, the entropy decoding unit 70 can decode each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, may include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics may include different probability estimates for each binary of the codeword, depending on the position of the respective binary within the codeword. In some examples, the entropy decoding unit 70 can determine the probability estimates using the corresponding binary values for previously decoded video data blocks, such as, for example, codeword binaries that correspond to the x and y coordinates and the exchanged x and y coordinates for previously decoded blocks, for example, as part of determining the statistics based on the position information of the last significant coefficient for previously decoded blocks, as described above. In other examples, the entropy decoding unit 70 can also update the probability estimates using the value of each binary, for example, as part of updating the statistics based on the x and y coordinates and the exchanged x and y coordinates, as also described previously. The entropy decoding unit 70 can use the probability estimates to decode each binary by performing context-adaptive entropy coding.
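The following sketch illustrates, purely as an example, a unary binarization of a coordinate value and per-binary probability estimates indexed by binary position. The counter-based estimate and the fixed table size are assumptions made for clarity; an actual CABAC engine maintains its probability states differently.

    # Illustrative sketch of unary binarization and per-bin-position estimates.
    def unary_bins(value, max_value):
        """Unary codeword: `value` ones followed by a terminating zero
        (the zero is omitted when value == max_value)."""
        bins = [1] * value
        if value < max_value:
            bins.append(0)
        return bins

    class BinModel:
        def __init__(self):
            self.ones = 1
            self.total = 2          # start from a uniform estimate

        def p_one(self):
            return self.ones / self.total

        def update(self, b):
            self.ones += b
            self.total += 1

    # one probability model per binary position of the coordinate's codeword
    models = [BinModel() for _ in range(8)]

    for coord in (2, 0, 3):                      # previously coded coordinate values
        for pos, b in enumerate(unary_bins(coord, max_value=7)):
            models[pos].update(b)

    print([round(m.p_one(), 2) for m in models[:3]])  # -> [0.6, 0.75, 0.5]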
[0136] As another example, in some cases different values of a binary of a unary codeword for one coordinate (x, for example) may result in different probability estimates for a corresponding binary of a unary codeword for the other coordinate (y, for example). Therefore, when decoding a binary of a unary codeword for one coordinate using probability estimates that correspond to the binary, as described above, using probability estimates that include information about the value of a binary, for example, a corresponding binary, of a unary codeword for the other coordinate can improve the accuracy of the probability estimates and, therefore, can allow more effective decoding. For example, the binary of the unary codeword for the other coordinate can be a binary that corresponds to the binary of the unary codeword for the one coordinate, as, for example, the binaries can be located in the same, or similar, binary positions within their respective codewords.
[0137] Decoding the x and y coordinates and the exchanged x and y coordinates (or the x and y coordinates that indicate position information of the last significant coefficient for a block of video data, in general) in this "interleaved" way, using previously decoded binaries as contexts, can allow the use of the mutual information of the respective x and y coordinates, which can allow more effective decoding of the coordinates.
[0138] In other examples, the entropy decoding unit 70 can be configured to decode the x and y coordinates and the exchanged x and y coordinates in an interleaved manner. In some examples, the entropy decoding unit 70 may be configured to decode individual binaries of the codewords for the respective x and y coordinates in an interleaved manner. In other examples, the entropy decoding unit 70 can be configured to decode groups of binaries of the codewords in an interleaved manner. For example, some codeword binaries for each of the x and y coordinates can be decoded using a first coding mode (a regular coding mode, for example), while the remaining codeword binaries can be decoded using a second coding mode (a bypass coding mode, for example). Thus, the entropy decoding unit 70 can be configured to decode one or more binaries of the codeword that corresponds to one of the coded coordinates using the first coding mode (regular, for example) before decoding one or more binaries of the codeword that corresponds to the other coded coordinate using the first coding mode, followed by decoding one or more binaries of the codeword that corresponds to the one coded coordinate using the second coding mode (bypass, for example) before decoding one or more binaries of the codeword that corresponds to the other coordinate coded using the second coding mode. In other examples, the entropy decoding unit 70 can be configured to decode the codeword binaries coded using the second coding mode together.
[0139] Therefore, separating the decoding of the codeword binaries for each of the x and y coordinates in the manner described above can allow the binaries decoded using a specific coding mode (the bypass mode, for example) to be grouped together, which can improve coding performance.
[0140] In other words, in cases where each of the x and y coordinates and the exchanged x and y coordinates comprises a sequence of one or more binaries, the entropy decoding unit 70 can be configured to decode the x and y coordinates and the exchanged x and y coordinates by performing context-adaptive entropy coding, which includes applying the context model that includes the statistics based on one of the x and y coordinates and the exchanged x and y coordinates. The entropy decoding unit 70 can be configured to decode the respective x and y coordinates by decoding at least one binary of the sequence that corresponds to one of the coordinates by selecting the statistics from the context model based, at least in part, on the value of at least one binary of the sequence that corresponds to the other coordinate. In addition, the entropy decoding unit 70 can be configured to decode the one or more binaries of the sequence that corresponds to one of the coordinates and the one or more binaries of the sequence that corresponds to the other coordinate in an interleaved manner.
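A possible ordering of the codeword binaries consistent with the grouping described above is sketched below, where the binaries coded in the regular mode for both coordinates are placed before the binaries coded in the bypass mode, which are grouped together. The split point REGULAR_PREFIX is an assumed parameter for this sketch, not a value taken from this description.

    # Sketch: order the codeword bins of the two coordinates so that the
    # regular-coded bins come first and the bypass-coded bins are grouped.
    REGULAR_PREFIX = 2   # assumed: first two bins of each codeword use contexts

    def ordered_bins(x_bins, y_bins):
        order = []
        order += [("x", "regular", b) for b in x_bins[:REGULAR_PREFIX]]
        order += [("y", "regular", b) for b in y_bins[:REGULAR_PREFIX]]
        # remaining bins of both codewords are bypass-coded, grouped together
        order += [("x", "bypass", b) for b in x_bins[REGULAR_PREFIX:]]
        order += [("y", "bypass", b) for b in y_bins[REGULAR_PREFIX:]]
        return order

    print(ordered_bins([1, 1, 1, 0], [1, 0]))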
[0141] Therefore, to decode the position information of the last significant coefficient, the entropy decoding unit 70 can be configured to decode the x and y coordinates and the exchanged x and y coordinates using previously decoded binaries as contexts. That is, the entropy decoding unit 70 can be configured to decode each binary of a unary codeword for a given coordinate by executing a context-adaptive entropy coding process that includes applying a context model based on at least one context, where the at least one context can include the position of the binary within the unary codeword, as described above, and the value of one or more previously decoded binaries of a unary codeword for the other coordinate. In addition, the entropy decoding unit 70 can be configured to decode the x and y coordinates and the exchanged x and y coordinates, in general, in an interleaved manner.
[0142] It should be noted that, in other examples compatible with the techniques of this description, other types of codewords can be used, such as, for example, truncated unary codewords, exponential Golomb codewords, concatenated codewords, as well as combinations of different coding techniques. It should also be noted that, in some examples, the entropy decoding unit 70 can also be configured to decode the x and y coordinates when the scan order comprises a third scan order. For example, the third scan order may be different from the first scan order and the second scan order. As an example, the third scan order can be a zigzag scan order or a diagonal scan order, where the zigzag or diagonal scan order also originates from the common position within the block, such as the DC position, for example.
[0143] In this example, in some cases the entropy decoding unit 70 can also be configured to decode information that identifies the scan order, that is, the scan order information for the block. Alternatively, as previously described, the entropy decoding unit 70 may omit decoding scan order information for the block when the entropy decoding unit 70 uses an adaptive scan order to decode the block. In addition, in some cases the entropy decoding unit 70 can also be configured to decode information that identifies the positions of other significant coefficients within the block, that is, the position information of significant coefficients for the block.
[0144] For example, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators, as previously described. As also previously described, the position information of significant coefficients can be decoded by decoding each significant coefficient indicator in the sequence by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0145] Again, the context model can include probability estimates that indicate the probability that each indicator comprises a given value ("0" or "1", for example). In some examples, the entropy decoding unit 70 can determine the probability estimates using the values of significant coefficient indicators for previously decoded video data blocks. In other examples, the entropy decoding unit 70 may also update the probability estimates using the value of each indicator to reflect the probability that the indicator comprises a given value. For example, the updated probability estimates can be used to decode position information of significant coefficients for blocks of video data subsequently decoded in the manner described above.
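As a hedged illustration of the significance map decoding described above, the sketch below walks the scan positions up to the last significant position and reads one indicator per position, selecting a context from the position itself; the decode_flag callable stands in for the arithmetic decoder and is purely hypothetical.

    # Sketch: one significant-coefficient flag per scan position up to the
    # last significant position, each with a context based on its position.
    def decode_significance_map(last_scan_pos, decode_flag):
        return [decode_flag(context=scan_pos) for scan_pos in range(last_scan_pos + 1)]

    # toy decoder that marks every position significant
    print(decode_significance_map(5, lambda context: 1))  # -> [1, 1, 1, 1, 1, 1]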
[0146] As another example, the entropy decoding unit 70 can be configured to decode x and y coordinates that indicate a position of the last significant coefficient within a specific block of video data according to a scan order associated with the block. For example, the scan order can be one of a series of scan orders, where each scan order in the series originates from a common position within the block, such as the DC position, for example.
[0147] In this example, to decode the x and y coordinates, the entropy decoding unit 70 can be configured to decode information that indicates whether the x coordinate corresponds to the common position, decode information that indicates whether the y coordinate corresponds to the common position and, in case the x coordinate does not correspond to the common position and the y coordinate does not correspond to the common position, decode information that identifies the scan order. The entropy decoding unit 70 can also be configured to, in case the x coordinate does not correspond to the common position, decode the x coordinate based on the scan order and, in case the y coordinate does not correspond to the common position, decode the y coordinate based on the scan order.
[0148] In this example, to decode the x coordinate and the y coordinate based on the scan order, the entropy decoding unit 70 can be configured to perform a context-adaptive entropy coding process (a CABAC process, for example), which includes the application by the entropy decoding unit 70 of a context model based on at least one context. For example, the at least one context can include the scan order.
[0149] In addition, as an example, the entropy decoding unit 70 can be configured to decode one coordinate (the y coordinate, for example) after another coordinate (the x coordinate, for example), where the entropy decoding unit 70 can be configured to decode the one coordinate using the value of the other, previously decoded coordinate as a context. As another example, where each of the x and y coordinates comprises a sequence of one or more binaries, the entropy decoding unit 70 can be configured to decode at least one binary of the sequence that corresponds to one of the coordinates by selecting statistics from the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the entropy decoding unit 70 can be configured to decode the one or more binaries of the sequence that corresponds to one of the coordinates and the one or more binaries of the sequence that corresponds to the other coordinate in an interleaved manner.
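The decoding order described in the preceding paragraphs might be organized as in the following sketch, in which two flags indicate whether the x and y coordinates equal the common (DC) position and the scan order is decoded only when neither coordinate is at that position. The decode_flag, decode_scan_order and decode_coord callables, and the default_scan fallback, are hypothetical stand-ins for the entropy decoder, not elements defined by this description.

    # Sketch of the conditional decoding flow for the last-coefficient position.
    def decode_last_position(decode_flag, decode_scan_order, decode_coord, default_scan):
        x_is_dc = decode_flag("last_x_is_dc")
        y_is_dc = decode_flag("last_y_is_dc")

        scan_order = default_scan
        if not x_is_dc and not y_is_dc:
            scan_order = decode_scan_order()      # decoded only when neither is DC

        x = 0 if x_is_dc else decode_coord("x", scan_order)
        y = 0 if y_is_dc else decode_coord("y", scan_order)
        return x, y, scan_order

    # toy usage with stand-in decoders
    x, y, order = decode_last_position(
        decode_flag=lambda name: name == "last_y_is_dc",   # pretend only y is at DC
        decode_scan_order=lambda: "vertical",
        decode_coord=lambda axis, scan: 3,
        default_scan="horizontal",
    )
    print(x, y, order)  # -> 3 0 horizontal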
[0150] In any case, after decoding the position information of the last significant coefficient and, in some cases, the scan order information and the position information of significant coefficients, that is, the SM data, for the block as described above, the entropy decoding unit 70 can also decode the value of each significant coefficient (such as, for example, the magnitude and sign of each significant coefficient, indicated by the syntax elements "level_of_the_coef_menos1" and "indicator_of_sign_of_coef", respectively) within the block.
[0151] Therefore, the techniques of this description may allow the entropy decoding unit 70 to decode the position information of the last significant coefficient for the block more effectively than when using other methods, and may allow the entropy decoding unit 70 to have less complexity compared to other systems. In this way, there can be a relative bit saving for the encoded bit stream that includes the position information of the last significant coefficient and a relative reduction in complexity for the entropy decoding unit 70 used to decode the information, when using the techniques of this description.
[0152] The motion compensation unit 72 can use motion vectors received in the bit stream to identify a prediction block in reference frames in memory 82. The intraprediction module 74 can use the intraprediction modes received in the bit stream to form a prediction block from spatially adjacent blocks.
[0153] The intraprediction module 74 may use an indication of the intraprediction mode for the encoded block to intrapredict the encoded block, such as, for example, using pixels from neighboring decoded blocks. For examples in which the block is encoded in an interprediction mode, the motion compensation unit 72 can receive information that defines a motion vector, to retrieve motion-compensated prediction data for the encoded block. In any event, the motion compensation unit 72 or the intraprediction module 74 can provide information that defines a prediction block to adder 80.
[0154] The inverse quantization unit 76 inverse quantizes, that is, de-quantizes, the quantized block coefficients provided in the bit stream and decoded by the entropy decoding unit 70. The inverse quantization process may include a conventional process defined, for example, by the H.264 decoding standard or performed by the HEVC Test Model. The inverse quantization process may also include the use of a quantization parameter QPy calculated by the video encoder 20 for each block to determine the degree of quantization and, similarly, the degree of inverse quantization that must be applied.
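As a minimal illustration of this inverse quantization step, the sketch below scales each quantized coefficient by a step size derived from the quantization parameter; the step-size model (doubling every six QP values) is a common convention shown only as an assumption, not the exact rule of any particular standard.

    # Minimal sketch of inverse quantization driven by a QP value.
    def inverse_quantize(quantized_coeffs, qp):
        step = 2 ** (qp / 6.0)          # assumed step-size model
        return [level * step for level in quantized_coeffs]

    print(inverse_quantize([4, -2, 1, 0], qp=24))  # -> [64.0, -32.0, 16.0, 0.0]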
[0155] The inverse transform module 78 applies an inverse transform, such as, for example, an inverse DCT, an inverse integer transform or a conceptually similar inverse transform process, to the transform coefficients to produce residual blocks in the pixel domain. The motion compensation unit 72 produces motion-compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for the interpolation filters to be used for motion estimation with sub-pixel precision can be included in the syntax elements. The motion compensation unit 72 can use interpolation filters used by the video encoder 20 during the encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. The motion compensation unit 72 can determine the interpolation filters used by the video encoder 20 according to the syntax information received and use the interpolation filters to produce predictive blocks.
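The following sketch illustrates sub-pixel interpolation with a symmetric 6-tap filter of the kind commonly used for half-sample luma positions; the exact filter taps an encoder signals or implies may differ, so the values below are only an example and not the filter prescribed by this description.

    # Illustrative half-sample interpolation along one row of samples.
    TAPS = (1, -5, 20, 20, -5, 1)     # example 6-tap filter, normalized by 32

    def half_pel(samples, i):
        """Interpolate the half-sample position between samples[i] and samples[i+1]."""
        window = [samples[min(max(i - 2 + k, 0), len(samples) - 1)] for k in range(6)]
        value = sum(t * s for t, s in zip(TAPS, window))
        return max(0, min(255, (value + 16) >> 5))   # round, normalize, clip to 8 bits

    print(half_pel([10, 10, 10, 20, 20, 20], 2))  # -> 15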
[0156] The motion compensation unit 72 uses some of the syntax information for the encoded block to determine the sizes of the blocks used to encode the frame(s) of the encoded video sequence, partition information that describes how each block of a frame or slice of the encoded video sequence is partitioned, modes that indicate how each partition is encoded, one or more reference frames (and reference frame lists) for each intercoded block or partition, and other information to decode the encoded video sequence. The intraprediction module 74 can also use the syntax information for the encoded block to intrapredict the encoded block, such as, for example, using pixels from neighboring, previously decoded blocks, as described above.
[0157] The adder 80 adds the residual blocks to the corresponding prediction blocks generated by the motion compensation unit 72 or the intraprediction module 74 to form decoded blocks. If desired, a deblocking filter can also be applied to filter the decoded blocks in order to remove blocking artifacts. The decoded video blocks are then stored in memory 82, which provides reference blocks for subsequent motion compensation and also produces decoded video for presentation on a display device (such as the display device 32 in Figure 1).
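A minimal sketch of this reconstruction step is shown below: the decoded residual is added to the prediction and the result is clipped to the valid sample range before being stored for display and for future motion compensation.

    # Sketch of residual + prediction reconstruction with clipping.
    def reconstruct(prediction, residual, max_value=255):
        return [max(0, min(max_value, p + r)) for p, r in zip(prediction, residual)]

    print(reconstruct([120, 130, 255], [5, -10, 9]))  # -> [125, 120, 255]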
[0158] In this way, the video decoder 30 represents an example of a video decoder configured to encode x and y coordinates that indicate a position of the last non-zero coefficient within a video data block according to a scan order associated with the block when the scan order comprises a first scan order, and to encode interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0159] Figures 4A-4C are conceptual diagrams showing an example of a video data block and the corresponding position information of significant coefficients and position information of the last significant coefficient. As shown in Figure 4A, a video data block, such as, for example, a macroblock or a CU, can include quantized transform coefficients. As shown in Figure 4A, for example, block 400 can include quantized transform coefficients generated using the prediction, transform and quantization techniques described above. Suppose, for this example, that block 400 has a size of 2Nx2N, where N is equal to two. Therefore, block 400 is 4x4 in size and includes sixteen quantized transform coefficients, as also shown in Figure 4A. Suppose also that the scan order associated with block 400 is the zigzag scan order, as shown in Figure 5A, described in more detail below.
[0160] In this example, the last significant coefficient within block 400 according to the zigzag scan order is a quantized transform coefficient equal to "1", located at position 406 within block 400. In other examples, as described above, a block can include more or fewer quantized transform coefficients than block 400. In yet another example, the scan order associated with block 400 can be a different scan order, such as a horizontal scan order, a vertical scan order, a diagonal scan order, or another scan order.
[0161] Figure 4B shows an example of significant coefficient indicator data, that is, significant coefficient indicators in the form of a map, or a block, as previously described. In the example of Figure 4B, block 402 may correspond to block 400 shown in Figure 4A. In other words, the significant coefficient indicators in block 402 can correspond to the quantized transform coefficients in block 400. As shown in Figure 4B, the significant coefficient indicators in block 402 that are equal to "1" correspond to the significant coefficients in the block 400. Likewise, the significant coefficient indicators in block 402 that are equal to "0" correspond to the zero, or non-significant, coefficients in block 400.
[0162] In this example, the significant coefficient indicator in block 402 that corresponds to the last significant coefficient within block 400 according to the zigzag scanning order is a significant coefficient indicator equal to "1", located at position 408 within block 402. In other examples, the values of the significant coefficient indicators used to indicate significant or non-significant coefficients may vary (the significant coefficient indicators equal to "0" may correspond to significant coefficients, and the significant coefficient indicators equal to "1" may correspond to non-significant coefficients, for example).
[0163] Figure 4C shows an example of last significant coefficient indicator data, that is, last significant coefficient indicators represented in the form of a map or block, as also described previously. In the example of Figure 4C, block 404 may correspond to block 400 and block 402 shown in Figures 4A and 4B, respectively. In other words, the last significant coefficient indicators in block 404 can correspond to the quantized transform coefficients in block 400 and to the significant coefficient indicators in block 402.
[0164] As shown in Figure 4C, the last significant coefficient indicator of block 404 that is equal to "1", located at position 410 within block 404, corresponds to the last significant coefficient of block 400, and to the last of the significant coefficient indicators of block 402 that are equal to "1", according to the zigzag scanning order. Likewise, the last significant coefficient indicators in block 404 that are equal to "0" (that is, all remaining last significant coefficient indicators) correspond to the zero, or non-significant, coefficients in block 400, and to all significant coefficient indicators of block 402 that are equal to "1" other than the last of such significant coefficient indicators according to the zigzag scanning order.
[0165] The values of the last significant coefficient indicators used to indicate the last significant coefficient according to the scan order may vary (a last significant coefficient indicator equal to "0" may correspond to the last significant coefficient according to the scan order, and the last significant coefficient indicators equal to "1" may correspond to all the remaining coefficients, for example). In any case, the significant coefficient indicators in block 402 and the last significant coefficient indicators in block 404 can be collectively referred to as SM data for block 400.
[0166] As described above, the position information of significant coefficients for a block of video data can be indicated by serializing the significant coefficient indicators for the block from a two-dimensional block representation, as shown in block 402 of Figure 4B, into a one-dimensional array, using a scan order associated with the block. In the example of blocks 400-402 shown in Figures 4A-4B, again assuming the zigzag scanning order, the position information of significant coefficients for block 400 can be indicated by serializing the significant coefficient indicators of block 402 into a one-dimensional array. That is, the position information of significant coefficients for block 400 can be indicated by generating a sequence of significant coefficient indicators for block 402 according to the zigzag scanning order.
[0167] In this example, the generated sequence can correspond to the value "111111", which represents the first 6 significant coefficient indicators in block 402 according to the zigzag scanning order. It should be noted that the generated sequence can contain significant coefficient indicators that correspond to a range of block positions within block 400, starting from the first block position in the zigzag scan order (that is, the DC position) and ending with the block position that corresponds to the last significant coefficient of block 400 according to the zigzag scanning order (that is, which corresponds to the last significant coefficient indicator equal to "1" in block 404).
[0168] Also as described above, the position information of the last significant coefficient for the block can be indicated by serializing the last significant coefficient indicators for the block from a two-dimensional block representation, as shown in block 404 of Figure 4C, into a one-dimensional array, using the scan order associated with the block. In the example of blocks 400-404 shown in Figures 4A-4C, again assuming the zigzag scan order, the position information of the last significant coefficient for block 400 can be indicated by serializing the last significant coefficient indicators of block 404 into a one-dimensional array. That is, the position information of the last significant coefficient for block 400 can be indicated by generating a sequence of last significant coefficient indicators of block 404 according to the zigzag scanning order. In this example, the generated sequence can correspond to the value "000001", which represents the first 6 last significant coefficient indicators of block 404 according to the zigzag scanning order.
[0169] Once again, it should be noted that the generated sequence may contain last significant coefficient indicators that correspond to a range of block positions within block 400, starting from the first block position in the zigzag scanning order and ending with the block position that corresponds to the last significant coefficient of block 400 according to the zigzag scan order (that is, that corresponds to the last significant coefficient indicator equal to "1" in block 404). Therefore, in this example, no last significant coefficient indicator following the last significant coefficient indicator equal to "1" according to the zigzag scanning order is included in the sequence. Generally speaking, the last significant coefficient indicators that follow the last significant coefficient indicator equal to "1" according to the scan order associated with a block of video data may not be needed to indicate the position information of the last significant coefficient for the block and, therefore, in some examples, these indicators are omitted from the generated sequence of last significant coefficient indicators used to indicate the information.
[0170] It should also be noted that, as described above, if the last significant coefficient is located in the last block position according to the scan order (the bottom right block position, for example), the generated sequence may not include a last significant coefficient indicator corresponding to the last block position, since it can be inferred that the position contains the last significant coefficient for the block. Therefore, in this example, the generated sequence may correspond to the value "000000000000000", where the last significant coefficient indicator corresponding to the last block position is not included in the sequence and is, by inference, equal to "1".
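The serialization described in the preceding paragraphs can be illustrated with the short sketch below, which uses a hypothetical 4x4 block whose six leading coefficients along the zigzag scan are significant (the actual coefficient values of Figure 4A are not reproduced here, and the zigzag ordering shown is one common 4x4 definition, which may differ in detail from the one assumed in the figures).

    # Sketch: serializing significant-coefficient flags and last-coefficient
    # flags for a hypothetical 4x4 block under a common zigzag scan.
    ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
                  (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]  # (row, col)

    block = [[7, 6, 1, 0],      # hypothetical quantized transform coefficients
             [-2, 4, 0, 0],
             [3, 0, 0, 0],
             [0, 0, 0, 0]]

    levels = [block[r][c] for r, c in ZIGZAG_4x4]
    last = max(i for i, v in enumerate(levels) if v != 0)

    sig_flags = ''.join('1' if v != 0 else '0' for v in levels[:last + 1])
    last_flags = ''.join('1' if i == last else '0' for i in range(last + 1))

    print(sig_flags)    # -> 111111
    print(last_flags)   # -> 000001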
[0171] Figures 5A-5C are conceptual diagrams showing examples of video data blocks scanned using a zigzag scan order, a horizontal scan order and a vertical scan order, respectively. As shown in Figures 5A-5C, an 8x8 block of video data, such as a macroblock or a CU, can include sixty-four quantized transform coefficients in corresponding block positions, denoted with circles. For example, blocks 500-504 can each include sixty-four quantized transform coefficients generated using the prediction, transform and quantization techniques described above, where, once again, each corresponding block position is denoted with a circle. Suppose, for this example, that blocks 500-504 have a size of 2Nx2N, where N is equal to four. Therefore, blocks 500-504 are 8x8 in size.
[0172] As shown in Figure 5A, the scan order associated with block 500 is the zigzag scan order. The zigzag scan order scans the quantized transform coefficients of block 500 diagonally, as indicated by the arrows in Figure 5A. Similarly, as shown in Figures 5B and 5C, the scan orders associated with blocks 502 and 504 are the horizontal scan order and the vertical scan order, respectively. The horizontal scan order scans the quantized transform coefficients of block 502 line by line in the horizontal direction, that is, in a "raster" fashion, while the vertical scan order scans the quantized transform coefficients of block 504 line by line in the vertical direction, that is, in a "rotated raster" fashion, also as indicated by the arrows in Figures 5B and 5C.
[0173] In other examples, as described above, a block may be of a size that is smaller or larger than the size of blocks 500-504 and may include more or fewer quantized transform coefficients and corresponding block positions. In these examples, the scan order associated with the block can scan the quantized transform coefficients of the block substantially as shown in the examples of the 8x8 blocks 500-504 of Figures 5A-5C, such that, for example, a 4x4 block or a 16x16 block can be scanned following any of the scan orders described above.
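For illustration, the sketch below generates the three scan orders discussed above for an N x N block as lists of (x, y) positions starting at the DC position; the zigzag construction alternates the direction of each anti-diagonal and is only one common definition, which may differ in detail from a particular codec's ordering.

    # Sketch: generating horizontal, vertical and zigzag scan orders for N x N.
    def horizontal_scan(n):
        return [(x, y) for y in range(n) for x in range(n)]      # row by row

    def vertical_scan(n):
        return [(x, y) for x in range(n) for y in range(n)]      # column by column

    def zigzag_scan(n):
        order = []
        for d in range(2 * n - 1):               # d = x + y indexes each anti-diagonal
            diag = [(x, d - x) for x in range(n) if 0 <= d - x < n]
            order += diag if d % 2 == 0 else diag[::-1]
        return order

    print(horizontal_scan(2))  # -> [(0, 0), (1, 0), (0, 1), (1, 1)]
    print(zigzag_scan(2))      # -> [(0, 0), (1, 0), (0, 1), (1, 1)]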
[0174] As previously described, the techniques in this description can also apply to a wide variety of other scan orders, including a diagonal scan order, scan orders that are combinations of zigzag, horizontal, vertical and/or diagonal scan orders, as well as scan orders that are partly zigzag, partly horizontal, partly vertical and/or partly diagonal. In addition, the techniques in this description may also consider a scan order that is itself adaptive based on statistics associated with previously encoded blocks of video data (blocks that have the same block size or the same encoding mode as the current block being encoded, for example). For example, an adaptive scan order may be the scan order associated with a block of video data, in some cases.
[0175] Figures 6A-6C are conceptual diagrams showing examples of video data blocks for which position information of the last significant coefficient is encoded based on scan order information, in accordance with the techniques of this description. As shown in Figure 6A, block 600 may include sixteen block positions ordered from 0 to 15 according to the horizontal scan order, as indicated by the arrows and described above with reference to Figure 5B. Each of the sixteen block positions can contain a quantized transform coefficient, as described above with reference to Figure 4A. Also as shown in Figure 6A, the third position within block 600 according to the horizontal scan order, which corresponds to position "2", can be referred to as position 606. In this example, position 606 can be represented using the x and y coordinates (2,0), where the x coordinate is equal to "2", the y coordinate is equal to "0", and the reference position, or "origin", which corresponds to the x and y coordinates (0,0), is located in the top left corner of block 600, that is, the DC position, as described above. Suppose, for this example, that position 606 corresponds to the position of the last significant coefficient within block 600 according to the horizontal scan order.
[0176] Suppose also that, for block 600, there are statistics that indicate the probability that a given position within block 600 corresponds to the position of the last significant coefficient within block 600 according to the horizontal scan order. In particular, the statistics can indicate the probability that a coordinate, such as the x or y coordinate, that corresponds to the position of the last significant coefficient within the block according to the horizontal scan order, comprises a given value (such as, for example, "0", "1", "2", etc.). In other words, the statistics can indicate the probability that each of the x and y coordinates (2,0) described above comprises a given value.
[0177] In addition, in some examples the x and y coordinates can be coded based on statistics, such as, for example, by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model which includes statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates and the scan order. In this example, the scan order, such as the horizontal scan order, can be used to select the specific context model that includes the statistics. In addition, in cases where one coordinate (the y coordinate, for example) is encoded after another coordinate (the x coordinate, for example), the coordinate can be encoded using the value of the other coordinate, previously encoded, as a context. That is, the value of a previously coded coordinate of the x and y coordinates can be used to also select statistics within the context model that indicate the probability that the other currently coded coordinate will comprise a given value.
[0178] In addition, in some examples the x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries, that is, "binarized". Therefore, to encode the x and y coordinates based on the statistics, each binary of a codeword that corresponds to a specific coordinate can be encoded by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, may include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics may include different probability estimates for each binary of the codeword, depending on the position of the respective binary within the codeword.
[0179] In the examples in Figures 6A-6B, the horizontal scan order of block 600 can be symmetric with respect to the vertical scan order of block 602, such that the probability of the x coordinate, "2", of the position of the last significant coefficient within block 600 according to the horizontal scan order comprising a given value may be identical or similar to the probability of the y coordinate, "2", of the position of the last significant coefficient within block 602 according to the vertical scan order comprising the same value, and vice versa. Likewise, the probability of the y coordinate, "0", of the position of the last significant coefficient within block 600 according to the horizontal scan order comprising a given value can be identical or similar to the probability of the x coordinate, "0", of the position of the last significant coefficient within block 602 according to the vertical scan order comprising the same value, and vice versa. That is, the x and y coordinates (2,0) of position 606 within block 600 may each have the same probability, or a similar probability, of comprising the given value as the exchanged x and y coordinates (0,2) of position 608 within block 602, respectively. As indicated by the dashed line in Figure 6B, the exchanged x and y coordinates (0,2) of position 608 within block 602 can correspond to position 610 within block 602, which can be represented using the x and y coordinates (2,0).
[0180] Therefore, according to the techniques of this description, common statistics that indicate the probability that a given position within block 600 corresponds to the position of the last significant coefficient within block 600 according to the horizontal scan order can be used to encode the x and y coordinates (2,0) of position 606 within block 600, as well as the exchanged x and y coordinates (0,2) of position 608 within block 602, as previously described.
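A small numeric check of the symmetry exploited here is sketched below: the index of the position (x, y) along the horizontal (row-by-row) scan equals the index of the swapped pair (y, x) along the vertical (column-by-column) scan of the same N x N block, which is what allows the two cases to share statistics.

    # Sketch: the scan index of (x, y) under the horizontal scan matches the
    # scan index of (y, x) under the vertical scan.
    def horizontal_index(x, y, n):
        return y * n + x          # row-by-row scan

    def vertical_index(x, y, n):
        return x * n + y          # column-by-column scan

    n = 4
    print(horizontal_index(2, 0, n), vertical_index(0, 2, n))  # -> 2 2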
[0181] As also shown in Figure 6C, block 604 can also include sixteen block positions, again ordered from 0 to 15, although in this case according to a zigzag scan order, as indicated by the arrows and described above with reference to Figure 5A. Each of the sixteen block positions can contain a quantized transform coefficient, as described above with reference to Figure 4A. Also as shown in Figure 6C, the third position within block 604 according to the zigzag scanning order, which corresponds to position "2", can be referred to as position 612. In this example, position 612 can be represented using the x and y coordinates (0,1), where the x coordinate is equal to "0", the y coordinate is equal to "1", and the reference position, or "origin", which corresponds to the x and y coordinates (0,0), is again located in the top left corner of block 604, that is, in the DC position, as described above. Suppose, for this example, that position 612 again corresponds to the position of the last significant coefficient within block 604 according to the zigzag scanning order.
[0182] In the example in Figure 6C, the zigzag scan order of block 604 may not be symmetric with respect to the horizontal scan order or the vertical scan order of blocks 600 and 602, respectively. Thus, the identity or similarity of the probabilities described above may not exist between the x and y coordinates that correspond to the position of the last significant coefficient within block 600 or block 602, and the x and y coordinates that correspond to the position of the last significant coefficient within block 604. However, the x and y coordinates that correspond to the position of the last significant coefficient within block 604 can still be encoded using the common statistics described above with reference to the examples in Figures 6A-6B. For example, although the use of common statistics to encode these x and y coordinates may not accurately reflect the probability that the respective coordinates comprise specific values, encoding the coordinates in this way can nevertheless improve overall coding efficiency by using common statistics, instead of separate statistics, thus potentially reducing the complexity of the system, as previously described.
[0183] Figure 7 is a flowchart showing an example of a method to effectively encode position information of the last significant coefficient based on scan order information, compatible with the techniques of this description. The techniques of Figure 7 can generally be performed by a processing unit or processor, whether implemented in hardware, software, firmware, or a combination thereof, and, when implemented in software or firmware, corresponding hardware can be provided to execute instructions for the software or firmware. For purposes of example, the techniques of Figure 7 are described with respect to video encoder 20 (Figures 1 and 2) and/or video decoder 30 (Figures 1 and 3), although it should be understood that other devices can be configured to perform similar techniques. Furthermore, the steps shown in Figure 7 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description.
[0184] Initially, video encoder 20 and/or video decoder 30 can determine whether the scan order associated with a video data block is a first scan order or a second scan order (700). For example, the block can be a macroblock or a CU, as previously described. In addition, the first scan order and the second scan order can be symmetric with respect to each other (or at least partially symmetric). For example, the first scan order can be a horizontal scan order and the second scan order can be a vertical scan order, where the horizontal scan order and the vertical scan order originate from a common position within the block, such as the DC position, for example, as also described previously.
[0185] Specifically, the first scan order and the second scan order can each be a scan order that can be used by video encoder 20 and / or video decoder 30 to encode the block. For example, the first and second scan orders can be scan orders used by video encoder 20 to encode blocks of video data and by video decoder 30 to decode blocks within the corresponding encoding system 10 comprising the encoder video 20 and video decoder 30. In some examples, the first and second scan orders may be just a few of the scan orders used within system 10 to encode the blocks. In other examples, the first and second scan orders may be the only scan orders used within system 10 to code the blocks. In this way, the exemplary method of Figure 7 can be applied to any encoding system that uses a series of scan orders to encode blocks of video data.
[0186] The video encoder 20 can determine whether the scan order is the first scan order or the second scan order directly, as part of the block encoding, for example. The video decoder 30 can make this determination by decoding scan order information for the block. For example, video encoder 20 can encode scan order information as described in more detail in the exemplary method of Figure 8, and video decoder 30 can decode information as also described in more detail in the exemplary method of Figure 9.
[0187] In case the scan order is the first scan order (702), the video encoder 20 and/or the video decoder 30 can also code x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order (704), that is, the position information of the last significant coefficient for the block. In case the scan order is the second scan order (702), however, the video encoder 20 and/or the video decoder 30 can instead code exchanged x and y coordinates that indicate the position of the last significant coefficient within the block according to the scan order (706). In this example, the exchanged x and y coordinates also correspond to the position information of the last significant coefficient for the block, but are further processed, that is, exchanged, by the video encoder 20 and/or the video decoder 30 to allow the information to be coded more effectively than when using other techniques, as previously described. Specifically, the exchanged x and y coordinates can allow the use of common statistics to code the x and y coordinates and the exchanged x and y coordinates that indicate the position information of the last significant coefficient for the block, as also described previously. In any case, the position information of the last significant coefficient for the block, whether represented using the x and y coordinates or using the exchanged x and y coordinates, can be encoded in the case of the video encoder 20 and decoded in the case of the video decoder 30.
[0188] To code the x and y coordinates and the interchanged x and y coordinates, the video encoder 20 and/or the video decoder 30 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order. In particular, the statistics can indicate the probability that a coordinate, such as an x or y coordinate, that corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order, comprises a given value (such as "0", "1", "2", etc.). In other words, the statistics can indicate the probability that each of the x and y coordinates described previously comprises a given value.
[0189] Since the first and second scan orders can be symmetric with respect to each other (or at least partially symmetric), the probability of the x coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the y coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability of the y coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the x coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same probability, or a similar probability, of comprising the given value as the exchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the statistics can also indicate the probability that each of the exchanged x and y coordinates comprises a given value. In some examples, the video encoder 20 and/or the video decoder 30 can determine the statistics using position information of the last significant coefficient for previously encoded video data blocks, such as the values of the x and y coordinates and the exchanged x and y coordinates for previously encoded blocks.
[0190] In this example, video encoder 20 and/or video decoder 30 can code the x and y coordinates and the exchanged x and y coordinates based on the statistics. For example, the video encoder 20 and/or the video decoder 30 can code the x and y coordinates and the exchanged x and y coordinates based on the statistics such that the probability of the x coordinate comprising a given value is used to code the x coordinate and the exchanged y coordinate, and the probability of the y coordinate comprising a given value is used to code the y coordinate and the exchanged x coordinate. In addition, the video encoder 20 and/or the video decoder 30 can update the statistics based on the x and y coordinates and the exchanged x and y coordinates to reflect the probability that the respective coordinates comprise specific values. In this example, the probability of the x coordinate comprising a given value can be updated using the x coordinate and the exchanged y coordinate, and the probability of the y coordinate comprising a given value can be updated using the y coordinate and the exchanged x coordinate. For example, the updated statistics can be used to code position information of the last significant coefficient for blocks of video data subsequently coded in the manner described above.
[0191] In some examples, to code the x and y coordinates and the exchanged x and y coordinates based on the statistics, the video encoder 20 and/or the video decoder 30 may perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates, the exchanged x and y coordinates and the scan order. In this example, video encoder 20 and/or video decoder 30 can use the scan order, such as the horizontal or vertical scan order, to select the specific context model that includes the statistics. That is, the video encoder 20 and/or the video decoder 30 can select the same statistics to code the x and y coordinates when using the first scan order to code the block, and to code the exchanged x and y coordinates when using the second scan order to code the block.
[0192] In addition, in cases where a coordinate (the y coordinate, for example) is coded after another coordinate (the x coordinate, for example), the video encoder 20 and/or the video decoder 30 can code the coordinate using the value of the other, previously coded coordinate as a context. That is, the value of a previously coded coordinate of the x and y coordinates or of the exchanged x and y coordinates, depending on the scan order used to code the block, can also be used to select statistics within the context model that indicate the probability that the other, currently coded coordinate comprises a given value. The video encoder 20 and/or the video decoder 30 can then use the selected statistics to code the x and y coordinates and the exchanged x and y coordinates by performing context-adaptive entropy coding.
[0193] In this example, the x and y coordinates and the exchanged x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more bits, or binaries, that is, "binarized". Therefore, to code the x and y coordinates and the exchanged x and y coordinates based on the statistics, the video encoder 20 and/or the video decoder 30 can code each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, can include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics may include different probability estimates for each binary of the codeword, depending on the position of the respective binary within the codeword. In some examples, the video encoder 20 and/or the video decoder 30 can determine the probability estimates using the corresponding binary values for previously coded video data blocks, for example, codeword binaries that correspond to the x and y coordinates and the exchanged x and y coordinates for previously coded blocks, for example, as part of determining the statistics based on the position information of the last significant coefficient for the previously coded blocks, as described above. In other examples, the video encoder 20 and/or the video decoder 30 may also update the probability estimates using the value of each binary, for example, as part of updating the statistics based on the x and y coordinates and the exchanged x and y coordinates, as also described previously. The video encoder 20 and/or the video decoder 30 can use the probability estimates to code each binary by performing context-adaptive entropy coding.
[0194] As another example, the video encoder 20 and/or the video decoder 30 can code the x and y coordinates and the exchanged x and y coordinates by coding at least one binary of the sequence that corresponds to one of the coordinates by selecting the statistics from the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the video encoder 20 and/or the video decoder 30 can code the one or more binaries of the sequence corresponding to one of the coordinates and the one or more binaries of the sequence corresponding to the other coordinate in an interleaved manner.
[0195] Finally, in some examples, the video encoder 20 and / or the video decoder 30 can encode information that indicates the positions of all other significant coefficients within the block according to the scan order (708), that is, the position information of significant coefficients for the block. For example, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators, as previously described. As also previously described, the position information of significant coefficients can be encoded by coding each significant coefficient indicator of the sequence by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0196] In this example, the context model can include probability estimates that indicate the probability that each indicator will comprise a given value ("0" or "1", for example). In some examples, the video encoder 20 and / or the video decoder 30 can determine the probability estimates using the corresponding significant coefficient indicator values for previously encoded video data blocks. In other examples, the video encoder 20 and / or the video decoder 30 can also update the probability estimates using the value of each indicator to reflect the probability of the indicator comprising a given value. For example, updated probability estimates can be used to encode position information of significant coefficients for blocks of video data subsequently encoded in the manner described above.
[0197] In this way, the method of Figure 7 represents an example of a method for encoding x and y coordinates that indicate the position of the last non-zero coefficient within a block of video data according to a scan order associated with the block when the scan order comprises a first scan order, and code interchanged x and y coordinates that indicate a position of the last nonzero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0198] Figure 8 is a flowchart showing an example of a method to effectively encode position information of the last significant coefficient based on scan order information for a video data block, compatible with the techniques of this description. The techniques of Figure 8 can generally be performed by a processing unit or processor, whether implemented in hardware, software, firmware, or a combination thereof, and, when implemented in software or firmware, corresponding hardware can be provided to execute instructions for the software or firmware. For purposes of example, the techniques of Figure 8 are described with respect to the entropy coding unit 56 (Figure 2), although it should be understood that other devices can be configured to perform similar techniques. Furthermore, the steps shown in Figure 8 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description.
[0199] Initially, the entropy coding unit 56 can determine a scan order associated with a block of video data, such as a macroblock or a CU, as previously described.
[0200] The entropy coding unit 56 can then determine whether the scan order is the first scan order or the second scan order, directly, as part of encoding the block, for example.
[0201] In case the scan order is the first scan order (806), the entropy coding unit 56 can also code the x and y coordinates (808). In case the scan order is the second scan order (806), however, the entropy coding unit 56 can instead interchange the x and y coordinates and encode the interchanged x and y coordinates (810). As previously described, the interchanged x and y coordinates also correspond to the position information of the last significant coefficient for the block, but are also processed, that is, interchanged, by the entropy coding unit 56 to allow the information to be coded more effectively than when using other techniques. Specifically, the interchanged x and y coordinates can allow the use of common statistics to encode the x and y coordinates and the interchanged x and y coordinates that indicate the position information of the last significant coefficient for the block, as also described previously. In any case, the entropy coding unit 56 can encode the position information of the last significant coefficient for the block, represented either as the x and y coordinates, or as the interchanged x and y coordinates.
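For purposes of illustration only, the coordinate interchange described above can be sketched as follows. The snippet is a hypothetical sketch rather than part of any described embodiment; the names ScanOrder, encodeLastPosition and encodeCoordinate are introduced here, and the context-adaptive coding of each coordinate is reduced to a stub.

```cpp
#include <cstdio>
#include <utility>

// Hypothetical scan orders; only two are distinguished here, as in steps
// (806)-(810) above.
enum class ScanOrder { Horizontal, Vertical };

// Stub standing in for the context-adaptive coding of one coordinate value.
static void encodeCoordinate(int value, const char* label) {
    std::printf("code %s = %d\n", label, value);
}

// Encode the last significant coefficient position: the coordinates are
// interchanged before coding when the second (vertical) scan order is used,
// so that the same statistics can be reused for both scan orders.
static void encodeLastPosition(int lastX, int lastY, ScanOrder order) {
    if (order == ScanOrder::Vertical) {
        std::swap(lastX, lastY);  // interchanged x and y coordinates
    }
    encodeCoordinate(lastX, "x");
    encodeCoordinate(lastY, "y");
}

int main() {
    encodeLastPosition(3, 1, ScanOrder::Horizontal);  // codes x = 3, y = 1
    encodeLastPosition(3, 1, ScanOrder::Vertical);    // codes interchanged: 1, 3
    return 0;
}
```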
[0202] To encode the x and y coordinates and the interchanged x and y coordinates, the entropy coding unit 56 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order. In particular, the statistics can indicate the probability that a coordinate, such as an x or y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order when the scan order comprises the first scan order, comprises a given value (such as "0", "1", "2", etc.). In other words, the statistics can indicate the probability of each of the x and y coordinates described previously comprising a given value.
[0203] Since the first and second scan orders can be symmetrical with respect to each other (or at least partially symmetrical), the probability of the x coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the y coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability of the y coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the x coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same, or a similar, probability of comprising a given value as the interchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the statistics can also indicate the probability that each of the interchanged x and y coordinates will comprise a given value. In some examples, the entropy coding unit 56 can determine the statistics using the position information of the last significant coefficient for previously encoded video data blocks, such as the values of the x and y coordinates and the interchanged x and y coordinates for the previously coded blocks.
[0204] In this example, the entropy coding unit 56 can encode the x and y coordinates and the interchanged x and y coordinates based on the statistics. For example, the entropy coding unit 56 can encode the x and y coordinates and the interchanged x and y coordinates based on the statistics such that the probability of the x coordinate comprising a given value is used to encode the x coordinate and the interchanged y coordinate, and the probability of the y coordinate comprising a given value is used to encode the y coordinate and the interchanged x coordinate. In addition, the entropy coding unit 56 can update the statistics based on the x and y coordinates and the interchanged x and y coordinates to reflect the likelihood that the respective coordinates will comprise specific values. In this example, the probability of the x coordinate comprising a given value can be updated using the interchanged x coordinate and y coordinate, and the probability of the y coordinate comprising a given value can be updated using the interchanged y coordinate and x coordinate. For example, the updated statistics can be used to encode position information of the last significant coefficient for blocks of video data subsequently encoded in the manner described above.
[0205] In some examples, to code the x and y coordinates and the interchanged x and y coordinates based on the statistics, the entropy coding unit 56 can perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates, the interchanged x and y coordinates and the scan order. In this example, the entropy coding unit 56 can use the scan order, such as the horizontal or vertical scan order, to select the specific context model that includes the statistics. That is, the entropy coding unit 56 can select the same statistics to encode the x and y coordinates when using the first scan order to encode the block, and to encode the interchanged x and y coordinates when using the second scan order to encode the block.
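The reuse of common statistics across the first and second scan orders can be pictured, under the assumptions stated here, as a single pair of context model sets that is indexed without regard to the scan order. The structure LastPositionContexts, its sizes and the function selectContext are illustrative names only, not elements of any described embodiment.

```cpp
#include <array>
#include <cstdio>

// Hypothetical probability state per context (e.g., a CABAC state index).
struct ContextModel { int state = 0; };

// One shared set of models for the first coded coordinate and one for the
// second, regardless of scan order. Because the coordinates are interchanged
// before coding under the second scan order, the x coordinate (first scan
// order) and the interchanged coordinate (second scan order) index the same
// models, which is how common statistics are reused.
struct LastPositionContexts {
    std::array<ContextModel, 8> first;   // sizes chosen for illustration only
    std::array<ContextModel, 8> second;
};

static ContextModel& selectContext(LastPositionContexts& ctx,
                                   bool isFirstCoordinate, int binIdx) {
    auto& set = isFirstCoordinate ? ctx.first : ctx.second;
    return set[binIdx];  // per-binary model, selected by position in the codeword
}

int main() {
    LastPositionContexts ctx;
    ContextModel& m = selectContext(ctx, /*isFirstCoordinate=*/true, 2);
    std::printf("selected model state = %d\n", m.state);
    return 0;
}
```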
[0206] In addition, in cases where one coordinate (the y coordinate, for example) is encoded after another coordinate (the x coordinate, for example), the entropy coding unit 56 can encode the coordinate using the value of the other coordinate, coded earlier, as context. That is, the value of a previously coded coordinate of the x and y coordinates or of the interchanged x and y coordinates, depending on the scan order used to code the block, can be used to also select statistics within the context model that indicate the probability of the other coordinate, currently encoded, comprising a given value. The entropy coding unit 56 can then use the selected statistics to encode the x and y coordinates and the interchanged x and y coordinates by performing context-adaptive entropy coding.
[0207] In this example, the x and y coordinates and the interchanged x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries. Thus, to encode the x and y coordinates and the interchanged x and y coordinates based on the statistics, the entropy coding unit 56 can encode each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, can include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics can include probability estimates for each codeword binary, depending on the position of the respective binary within the codeword. In some examples, the entropy coding unit 56 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, for example, codeword binaries that correspond to x and y coordinates and interchanged x and y coordinates for previously coded blocks, such as, for example, as part of determining the statistics based on the position information of the last significant coefficient for previously coded blocks, as previously described. In other examples, the entropy coding unit 56 can also update the probability estimates using the value of each binary, such as, for example, as part of updating the statistics based on the x and y coordinates and the interchanged x and y coordinates, as also described previously. The entropy coding unit 56 can use the probability estimates to encode each binary by performing context-adaptive entropy coding.
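One plausible unary binarization of a coordinate, with one context per binary position within the codeword, is sketched below. The function names and the truncated handling of the maximum coordinate value are assumptions made for illustration, and encodeBin stands in for the arithmetic coding engine.

```cpp
#include <cstdio>

// Stub for the arithmetic coding of one binary with a given context index.
static void encodeBin(int bin, int ctxIdx) {
    std::printf("bin=%d ctx=%d\n", bin, ctxIdx);
}

// Code a coordinate as a unary codeword: 'value' ones followed by a
// terminating zero, each binary coded with a context chosen by its position
// within the codeword. 'maxValue' allows the terminating zero to be omitted
// for the largest possible value (a truncated variant, assumed here).
static void encodeCoordinateUnary(int value, int maxValue) {
    for (int binIdx = 0; binIdx < value; ++binIdx) {
        encodeBin(1, binIdx);
    }
    if (value < maxValue) {
        encodeBin(0, value);  // terminating binary
    }
}

int main() {
    encodeCoordinateUnary(3, 7);  // e.g., a coordinate of 3 in an 8x8 block
    return 0;
}
```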
[0208] As described earlier, as another example, the entropy coding unit 56 can encode the x and y coordinates and the interchanged x and y coordinates by encoding at least one binary of the sequence that corresponds to one of the coordinates by selecting the statistics from the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the entropy coding unit 56 can encode the one or more binaries of the sequence that correspond to one of the coordinates and the one or more binaries of the sequence that correspond to the other coordinate in an interspersed manner.
[0209] In any case, the entropy coding unit 56 can also encode information that identifies the scan order (812), that is, the scan order information for the block. In some examples, in which the scan order includes one of two scan orders used within system 10 to encode blocks of video data, the entropy coding unit 56 can encode the scan order information using a single binary. For example, the entropy coding unit 56 can encode the single binary to indicate whether the scan order is a first scan order (bin = "0", for example) or a second scan order (bin = "1", for example). In other examples, in which the scan order includes one of three scan orders that can be used by system 10 to encode blocks of video data, the entropy coding unit 56 can encode the scan order information using between one and two binaries. For example, the entropy coding unit 56 can encode a first binary to indicate whether the scan order is a first scan order (such as, for example, bin1 = "0" if the scan order is the first scan order, and bin1 = "1" otherwise). In case the first binary indicates that the scan order is not the first scan order, the entropy coding unit 56 can encode a second binary to indicate whether the scan order is a second scan order (bin2 = "0", for example) or a third scan order (bin2 = "1", for example). In other examples, other methods can be used to encode the scan order information for the block, which include the use of other binary values. In some examples, the entropy coding unit 56 can signal each binary directly in the bit stream.
[0210] In other examples, the entropy coding unit 56 can also encode each binary using a context-adaptive entropy coding process (a CABAC process, for example) in a manner similar to that described above with reference to the coding of a binary of a codeword that corresponds to one of the x and y coordinates or of the interchanged x and y coordinates. Alternatively, as previously described, the entropy coding unit 56 can omit encoding the scan order information for the block when the entropy coding unit 56 uses an adaptive scan order to encode the block.
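The two signaling schemes described in the preceding paragraphs can be sketched as follows, assuming the binary assignments given there; encodeBin is a stub standing in for the actual writing or context-adaptive coding of each binary.

```cpp
#include <cstdio>

static void encodeBin(int bin) { std::printf("bin=%d\n", bin); }

enum class ScanOrder { First, Second, Third };

// Two-scan-order case: a single binary distinguishes the first and second
// scan orders.
static void encodeScanOrderOfTwo(ScanOrder order) {
    encodeBin(order == ScanOrder::First ? 0 : 1);
}

// Three-scan-order case: a first binary indicates whether the order is the
// first scan order; only if it is not, a second binary distinguishes the
// second and third scan orders.
static void encodeScanOrderOfThree(ScanOrder order) {
    if (order == ScanOrder::First) {
        encodeBin(0);
    } else {
        encodeBin(1);
        encodeBin(order == ScanOrder::Second ? 0 : 1);
    }
}

int main() {
    encodeScanOrderOfTwo(ScanOrder::Second);   // emits 1
    encodeScanOrderOfThree(ScanOrder::Third);  // emits 1, then 1
    return 0;
}
```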
[0211] In some examples, the entropy coding unit 56 may also encode information that indicates the positions of all other significant coefficients within the block according to the scan order (814), that is, the position information of significant coefficients for the block. As previously described, for example, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators. Also as previously described, the position information of significant coefficients can be encoded by coding each significant coefficient indicator in the sequence by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0212] The context model can include probability estimates that indicate the probability that each indicator will comprise a given value ("0" or "1", for example). In some examples, the entropy coding unit 56 can determine the probability estimates using the corresponding significant coefficient indicator values for previously encoded video data blocks. In other examples, the entropy coding unit 56 may also update the probability estimates using the value of each indicator to reflect the probability that the indicator will comprise a given value. For example, updated probability estimates can be used to encode position information of significant coefficients for blocks of video data encoded next in the manner described above.
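A sketch of the coding of the significant coefficient indicators, assuming coefficients already arranged in scan order and the scan position used directly as the context, is given below; the function names are hypothetical and encodeBin abstracts the CABAC engine.

```cpp
#include <cstdio>
#include <vector>

static void encodeBin(int bin, int ctxIdx) {
    std::printf("sig flag=%d ctx=%d\n", bin, ctxIdx);
}

// Code the significance map for the coefficients that precede the last
// significant coefficient in scan order. 'coeffInScanOrder' holds the
// quantized coefficients already arranged in the scan order used for the
// block, and 'lastScanPos' is the scan position of the last significant
// coefficient, which has already been signalled.
static void encodeSignificanceMap(const std::vector<int>& coeffInScanOrder,
                                  int lastScanPos) {
    for (int scanPos = 0; scanPos < lastScanPos; ++scanPos) {
        int sigFlag = coeffInScanOrder[scanPos] != 0 ? 1 : 0;
        int ctxIdx = scanPos;  // position within the block as the context
        encodeBin(sigFlag, ctxIdx);
    }
}

int main() {
    std::vector<int> coeffs = {5, 0, -1, 0, 2, 0, 0, 0};
    encodeSignificanceMap(coeffs, /*lastScanPos=*/4);
    return 0;
}
```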
[0213] Finally, the entropy coding unit 56 may stop encoding the position information of the last significant coefficient based on the scan order information for the block (818). For example, the entropy coding unit 56 can proceed to other coding tasks, such as, for example, the coding of other syntax elements for the block or a subsequent block, as described above.
[0214] In this way, the method of Figure 8 represents a method for encoding x and y coordinates that indicate a position of the last non-zero coefficient within a block of video data according to a scan order associated with the block when the scan order comprises a first scan order, and encoding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0215] Figure 9 is a flowchart showing an example of a method for effectively decoding encoded position information of the last significant coefficient based on scan order information for a video data block, compatible with the techniques of this description. The techniques in Figure 9 can generally be performed by a processing unit or processor, whether implemented in hardware, software, firmware or a combination thereof, and, when implemented in software or firmware, corresponding hardware can be provided to execute instructions for the software or firmware. For purposes of example, the techniques of Figure 9 are described with respect to the entropy decoding unit 70 (Figure 3), although it should be understood that other devices can be configured to perform similar techniques. Furthermore, the steps shown in Figure 9 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without abandoning the techniques of this description.
[0216] Initially, the entropy decoding unit 70 can receive significant data encoded for a video data block (900). For example, the block can be a macroblock or a TU of a CU, as previously described. The entropy decoding unit 70 can also decode the significant data to determine coordinates that indicate a position of the last significant coefficient within the block according to a scan order associated with the block (902), that is, the position information of the last significant coefficient for the block. The scan order can be a scan order used by an entropy coding unit, such as, for example, the entropy coding unit 56 of Figure 2, to encode the block, and it can be one of a series of scan orders that originate at a common position within the block, as previously described. Also as previously described, the common position can correspond to the DC position. In addition, the determined coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries.
[0217] As described above with reference to the example in Figure 8, the determined coordinates can correspond to x and y coordinates or interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order, depending on the scan order. For example, the coordinates can correspond to the x and y coordinates when the scan order comprises a first scan order, and to the interchanged x and y coordinates when the scan order comprises a second scan order. The x and y coordinates and the interchanged x and y coordinates correspond to the position information of the last significant coefficient for the block, but the interchanged x and y coordinates are also processed, that is, interchanged, to allow the information to be encoded more effectively than when using other techniques. Specifically, the interchanged x and y coordinates can allow the use of common statistics to encode the x and y coordinates and the interchanged x and y coordinates that indicate the position information of the last significant coefficient for the block, as also described previously.
[0218] In any case, in a manner similar to that described above with reference to the example of the entropy coding unit 56 of Figure 8, to decode the significant data to determine the coordinates, the entropy decoding unit 70 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, when the scan order comprises the first scan order. In particular, the statistics can indicate the probability that a coordinate, such as the x coordinate or the y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order when the scan order comprises the first scan order, comprises a given value (such as "0", "1", "2", etc.). In other words, the statistics can indicate the probability of each of the coordinates described above comprising a given value.
[0219] Since the first and second scan orders can be symmetrical with respect to each other (or at least partially symmetrical), the probability of the x coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the y coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. Likewise, the probability of the y coordinate comprising a given value when the scan order comprises the first scan order can be identical or similar to the probability of the x coordinate comprising the same value when the scan order comprises the second scan order, and vice versa. That is, the x and y coordinates, when the scan order comprises the first scan order, can each have the same, or a similar, probability of comprising a given value as the interchanged x and y coordinates, respectively, when the scan order comprises the second scan order. Thus, the statistics can also indicate the probability that each of the interchanged x and y coordinates will comprise a given value. In some examples, the entropy decoding unit 70 can determine the statistics using the position information of the last significant coefficient for previously encoded video data blocks, such as the values of the x and y coordinates and the interchanged x and y coordinates for the previously coded blocks.
[0220] In this example, the entropy decoding unit 70 can decode the significant data to determine the coordinates, that is, the x and y coordinates or the interchanged x and y coordinates, based on the statistics. For example, the entropy decoding unit 70 can decode the significant data to determine the x and y coordinates or the interchanged x and y coordinates based on the statistics such that the probability of the x coordinate comprising a given value is used to decode the significant data to determine the x coordinate and the interchanged y coordinate, and the probability of the y coordinate comprising a given value is used to decode the significant data to determine the y coordinate and the interchanged x coordinate. In addition, the entropy decoding unit 70 can update the statistics based on the x and y coordinates and the interchanged x and y coordinates to reflect the likelihood that the respective coordinates will comprise specific values. In this example, the probability of the x coordinate comprising a given value can be updated using the interchanged x coordinate and y coordinate, and the probability of the y coordinate comprising a given value can be updated using the interchanged y coordinate and x coordinate. For example, the updated statistics can be used to decode significant data to determine position information of significant coefficients for blocks of video data subsequently encoded in the manner described above.
[0221] In some examples, to decode the significant data to determine the x and y coordinates or the interchanged x and y coordinates based on the statistics, the entropy decoding unit 70 can perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates, the interchanged x and y coordinates and the scan order. In this example, the entropy decoding unit 70 can use the scan order, such as, for example, the horizontal or vertical scan order, to select the specific context model that includes the statistics. That is, the entropy decoding unit 70 can select the same statistics to decode the significant data to determine the x and y coordinates when using the first scan order to decode the block, and to determine the interchanged x and y coordinates when using the second scan order to decode the block.
[0222] The x and y coordinates and the interchanged x and y coordinates can each be represented using a unary codeword comprising a sequence of one or more binaries. Therefore, to decode the x and y coordinates and the interchanged x and y coordinates based on the statistics, the entropy decoding unit 70 can decode the significant data to determine each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, can include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics can include different probability estimates for each codeword binary, depending on the position of the respective binary within the codeword. In some examples, the entropy decoding unit 70 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, for example, codeword binaries that correspond to x and y coordinates and interchanged x and y coordinates for previously coded blocks, such as, for example, as part of determining the statistics based on the position information of the last significant coefficient for previously coded blocks, as previously described. In other examples, the entropy decoding unit 70 can also update the probability estimates using the value of each binary, such as, for example, as part of updating the statistics based on the x and y coordinates and the interchanged x and y coordinates, as also described previously. The entropy decoding unit 70 can use the probability estimates to decode the significant data to determine each binary by performing context-adaptive entropy coding.
[0223] As previously described, as another example, the entropy decoding unit 70 can decode the x and y coordinates and the interchanged x and y coordinates by decoding at least one binary of the sequence that corresponds to one of the coordinates by selecting the statistics from the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the entropy decoding unit 70 can decode the one or more binaries of the sequence that correspond to one of the coordinates and the one or more binaries of the sequence that correspond to the other coordinate in an interspersed manner.
[0224] The entropy decoding unit 70 can also receive encoded scan order data for the block (904). The entropy decoding unit 70 can also decode the scan order data to determine information that identifies the scan order (906), that is, the scan order information for the block. Alternatively, as previously described, the entropy decoding unit 70 may not receive and decode the encoded scan order data for the block when the entropy decoding unit 70 uses an adaptive scan order to decode the block. In any case, the entropy decoding unit 70 can also determine whether the scan order is a first scan order or a second scan order (908). For example, the first and second scan orders can be scan orders that can be used by the entropy decoding unit 70 to decode blocks of video data within the corresponding coding system 10 comprising the video encoder 20 and the video decoder 30, as previously described. The first and second scan orders can be just some of the scan orders that can be used within system 10 to code the blocks. In other examples, the first and second scan orders can be the only scan orders used within system 10 to code the blocks. In some cases, the first and second scan orders can be symmetrical with respect to each other (or at least partially symmetrical). For example, the first scan order can be a horizontal scan order, and the second scan order can be a vertical scan order. The entropy decoding unit 70 can determine whether the scan order is the first scan order or the second scan order using the scan order information determined for the block.
[0225] In case the scan order is the first scan order (910), the entropy decoding unit 70 can continue to decode the block using the determined x and y coordinates. In some cases, the entropy decoding unit 70 can also receive remaining encoded significant data for the block (914). The entropy decoding unit 70 can also decode the remaining significant data to determine information indicating the positions of all other significant coefficients within the block according to the scan order (916), that is, the position information of significant coefficients for the block. As previously described, for example, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators. As also described previously, the remaining significant data can be decoded to determine the position information of significant coefficients by decoding the remaining significant data to determine each significant coefficient indicator in the sequence by performing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0226] The context model can include probability estimates that indicate the probability that each indicator will comprise a given value ("0" or "1", for example). In some examples, the entropy decoding unit 70 can determine the probability estimates using the corresponding significant coefficient indicator values for previously encoded video data blocks. In other examples, the entropy decoding unit 70 can also update the probability estimates using the value of each indicator to reflect the probability that the indicator will comprise a given value. For example, the updated probability estimates can be used to decode the remaining significant data to determine position information of significant coefficients for blocks of video data subsequently encoded in the manner described above.
[0227] In case the scan order is the second scan order (910), however, the entropy decoding unit 70 can interchange the determined x and y coordinates (912) and continue to decode the block using the exchanged x and y coordinates in a similar way as described above with reference to steps (914) and (916). As previously described, the x and y coordinates and the exchanged x and y coordinates correspond to the position information of the last significant coefficient for the block, but the exchanged x and y coordinates are also processed to allow the information to be coded more effectively than when using other techniques.
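The decoder-side handling can be pictured as the mirror of the encoder: two coordinate values are decoded and then interchanged when the second scan order was used. The names below are hypothetical, and decodeCoordinate stands in for the context-adaptive decoding of one coordinate.

```cpp
#include <cstdio>
#include <utility>

enum class ScanOrder { Horizontal, Vertical };

// Stub standing in for the context-adaptive decoding of one coordinate.
static int decodeCoordinate() { return 0; }

// Decode the last significant coefficient position: the two decoded values
// are the x and y coordinates under the first scan order, and the
// interchanged coordinates under the second, so they are swapped back before
// being used to decode the rest of the block.
static std::pair<int, int> decodeLastPosition(ScanOrder order) {
    int first = decodeCoordinate();
    int second = decodeCoordinate();
    if (order == ScanOrder::Vertical) {
        std::swap(first, second);
    }
    return {first, second};  // {lastX, lastY}
}

int main() {
    auto [lastX, lastY] = decodeLastPosition(ScanOrder::Vertical);
    std::printf("lastX=%d lastY=%d\n", lastX, lastY);
    return 0;
}
```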
[0228] Finally, the entropy decoding unit 70 may stop decoding the position information of the last significant coefficient based on the scan order information for the block (918). For example, the entropy decoding unit 70 can proceed with other decoding tasks, such as, for example, decoding other syntax elements for the block, or a subsequent block, as described above.
[0229] In this way, the method of Figure 9 represents an example of a method for encoding x and y coordinates that indicate a position of the last non-zero coefficient within a video data block according to a scan order associated with the block when the scan order comprises a first scan order, and encoding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0230] Figure 10 is a flowchart showing another example of a method to effectively encode position information for the last significant coefficient based on scan order information for a video data block, compatible with the techniques of this description. The techniques in Figure 10 can generally be performed by any processing unit or processor, whether implemented in hardware, software, firmware or a combination thereof, and, when implemented in software or firmware, corresponding hardware can be provided to execute instructions for the software or firmware. For purposes of example, the techniques of Figure 10 are described with respect to the entropy coding unit 56 (Figure 2), although it should be understood that other devices can be configured to perform similar techniques. Furthermore, the steps shown in Figure 10 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without abandoning the techniques of this description.
[0231] Initially, the entropy coding unit 56 can receive a video data block (1000). For example, the block can be a macroblock or a TU of a CU, as previously described. The entropy coding unit 56 can also determine x and y coordinates that indicate a position of the last significant coefficient within the block according to a scan order associated with the block (1002), that is, the position information of the last significant coefficient for the block. For example, the scan order can be a scan order used by the entropy coding unit 56 to encode the block, and it can be one of a series of scan orders used to encode blocks of video data within the corresponding coding system 10 comprising the video encoder 20 and the video decoder 30. For example, each of the series of scan orders can originate at a common position within the block, such as, for example, the DC position. In addition, as also described above, the x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries.
[0232] The entropy coding unit 56 can also determine whether the x and y coordinates each correspond to a common position within the block (1004). The common position can correspond to the DC position. The entropy coding unit 56 can make this determination directly, such as, for example, as part of determining the x and y coordinates, as described above.
[0233] The entropy coding unit 56 can also encode an indication of whether the x coordinate corresponds to the common position (1006). Likewise, the entropy coding unit 56 can also encode an indication of whether the y coordinate corresponds to the common position (1008). The entropy coding unit 56 can encode each indication using a single binary. For example, the entropy coding unit 56 can encode a first binary, which indicates whether the x coordinate corresponds to the common position (bin1 = "1", for example) or not (bin1 = "0"), and a second binary, which indicates whether the y coordinate corresponds to the common position (bin2 = "1", for example) or not (bin2 = "0"). In some examples, the entropy coding unit 56 can signal each binary directly in the bit stream. In other examples, the entropy coding unit 56 can also encode each binary using a context-adaptive entropy coding process in a manner similar to that described above with reference to Figures 7-9, such as, for example, by executing a CABAC process, which includes applying a context model based on a context.
[0234] If the x and y coordinates each correspond to the common position (1010), the entropy coding unit 56 can stop encoding the position information of the last significant coefficient based on the scan order information for the block (1024). In other words, in cases where the x and y coordinates each correspond to the common position, no additional significant coefficients other than the last (and only) significant coefficient within the block according to the scan order exist within the block. In such cases, the entropy coding unit 56 does not need to encode any position information of the last significant coefficient, or any scan order information or additional significant coefficient position information for the block. In such cases, the entropy coding unit 56 can proceed with other coding tasks, such as, for example, the coding of other syntax elements for the block, or a subsequent block.
[0235] If the x and y coordinates do not each correspond to the common position (1010), the entropy coding unit 56 can also encode information that indicates the scan order (1012), that is, the scan order information for the block. In some examples, in which the scan order includes one of two scan orders used within system 10 to encode blocks of video data, the entropy coding unit 56 can encode the scan order information using a single binary. For example, the entropy coding unit 56 can encode the single binary to indicate whether the scan order is a first scan order (bin = "0", for example) or a second scan order (bin = "1", for example). In other examples, in which the scan order includes one of three scan orders that can be used by system 10 to encode blocks of video data, the entropy coding unit 56 can encode the scan order information using between one and two binaries. For example, the entropy coding unit 56 can encode a first binary to indicate whether the scan order is a first scan order (such as, for example, bin1 = "0" if the scan order is the first scan order, and bin1 = "1" otherwise). In case the first binary indicates that the scan order is not the first scan order, the entropy coding unit 56 can encode a second binary to indicate whether the scan order is a second scan order (bin2 = "0", for example) or a third scan order (bin2 = "1", for example). In other examples, other methods can be used to encode the scan order information for the block, which include the use of other binary values. In some examples, the entropy coding unit 56 can signal each binary directly in the bit stream. In other examples, the entropy coding unit 56 can also encode each binary using a context-adaptive entropy coding process similar to that described above with reference to Figures 7-9, such as, for example, by executing a CABAC process, which includes applying a context model based on a context. Alternatively, as described earlier, the entropy coding unit 56 may not encode the scan order information for the block when the entropy coding unit 56 uses an adaptive scan order to encode the block.
[0236] In any case, in case the x coordinate does not correspond to the common position (1014), the entropy coding unit 56 can also encode the x coordinate based on the scan order (1016). Likewise, in case the y coordinate does not correspond to the common position (1018), the entropy coding unit 56 can also encode the y coordinate based on the scan order (1020). To encode the x and y coordinates, the entropy coding unit 56 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order. In particular, the statistics can indicate the probability that a coordinate, such as an x or y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order, comprises a given value (such as, for example, "0", "1", "2", etc.). In other words, the statistics can indicate the probability that each of the x and y coordinates described above will comprise a given value. In some examples, the entropy coding unit 56 can determine the statistics using position information of the last significant coefficient for previously encoded video data blocks, such as, for example, the x and y coordinate values for the previously encoded blocks.
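Under the assumption that the common position maps to coordinate value 0, the flow of steps (1004) to (1020) can be sketched as follows; the function names are hypothetical, and the signaling of the scan order and of a coordinate that is not at the common position are reduced to stubs.

```cpp
#include <cstdio>

enum class ScanOrder { First, Second };

static void encodeBin(int bin) { std::printf("bin=%d\n", bin); }

// Stubs for the scan order signaling and the context-adaptive coding of a
// coordinate that is known not to lie at the common position.
static void encodeScanOrder(ScanOrder order) {
    encodeBin(order == ScanOrder::First ? 0 : 1);
}
static void encodeNonDcCoordinate(int value, ScanOrder /*order*/) {
    std::printf("code coordinate %d\n", value);
}

// One flag per coordinate indicates whether it equals the common (DC)
// position, assumed here to be coordinate value 0. When both flags are set,
// the DC coefficient is the only significant coefficient and nothing further
// is signalled; otherwise the scan order is signalled, followed by each
// coordinate that is not at the common position.
static void encodeLastPositionWithDcFlags(int lastX, int lastY,
                                          ScanOrder order) {
    encodeBin(lastX == 0 ? 1 : 0);
    encodeBin(lastY == 0 ? 1 : 0);
    if (lastX == 0 && lastY == 0) return;
    encodeScanOrder(order);
    if (lastX != 0) encodeNonDcCoordinate(lastX, order);
    if (lastY != 0) encodeNonDcCoordinate(lastY, order);
}

int main() {
    encodeLastPositionWithDcFlags(2, 0, ScanOrder::First);
    return 0;
}
```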
[0237] In some examples, statistics may vary depending on the scan order. In particular, the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order can vary depending on the scan order. That is, different scan orders can result in different statistics for the position information of the last significant coefficient for the block. Therefore, when encoding the position information of the last significant coefficient for the block based on the statistics, the choice of statistics based, at least in part, on the scan order can result in the use of accurate statistics and, therefore, can enable effective coding. Consequently, the entropy coding unit 56 can encode the x and y coordinates based on the statistics, wherein the entropy coding unit 56 selects the statistics based, at least in part, on the scan order. Therefore, the entropy coding unit 56 can encode the x and y coordinates based on the scan order. In addition, the entropy coding unit 56 can update the statistics based on the x and y coordinates to reflect the likelihood that the respective coordinates will comprise specific values. For example, the updated statistics can be used to encode position information of the last significant coefficient for subsequently encoded video data blocks in the manner described above.
[0238] In some examples, to encode the x and y coordinates based on the statistics, the entropy coding unit 56 can perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates and the scan order. In this example, the entropy coding unit 56 can use the scan order to select the specific context model that includes the statistics. In this way, the entropy coding unit 56 can encode the x and y coordinates based on the scan order. Furthermore, in cases where one coordinate (the y coordinate, for example) is encoded after another coordinate (the x coordinate, for example), the entropy coding unit 56 can encode the coordinate using the value of the other coordinate, coded earlier, as context. That is, the value of a previously coded coordinate of the x and y coordinates can be used to also select statistics within the context model that indicate the probability of the other coordinate, currently encoded, comprising a given value. The entropy coding unit 56 can then use the selected statistics to encode the x and y coordinates by executing a context-adaptive entropy coding process.
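One hypothetical way of combining the scan order and the value of a previously coded coordinate into a context index is sketched below; the number of models per set and the clipping of the previous coordinate value are illustrative assumptions only, not part of any described embodiment.

```cpp
#include <cstdio>

enum class ScanOrder { Horizontal, Vertical };

// Hypothetical context index derivation: the scan order selects one set of
// models, and the value of the previously coded coordinate (if any) further
// refines the choice within that set. The numbers used here are arbitrary.
static int selectContextIndex(ScanOrder order, int previousCoordinate) {
    const int modelsPerSet = 4;
    int setOffset = (order == ScanOrder::Horizontal ? 0 : modelsPerSet);
    int refinement = previousCoordinate < modelsPerSet ? previousCoordinate
                                                       : modelsPerSet - 1;
    return setOffset + refinement;
}

int main() {
    // Coding the y coordinate after an x coordinate of 2, vertical scan order.
    std::printf("ctx=%d\n", selectContextIndex(ScanOrder::Vertical, 2));
    return 0;
}
```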
[0239] In this example, the x and y coordinates can each be represented using a unary codeword that comprises a sequence of one or more binaries. Thus, to encode the x and y coordinates based on the statistics, the entropy coding unit 56 can encode each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, can include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics can include different probability estimates for each codeword binary, depending on the position of the respective binary within the codeword. In some examples, the entropy coding unit 56 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, for example, codeword binaries that correspond to x and y coordinates for previously coded blocks, such as, for example, as part of determining the statistics based on the position information of the last significant coefficient for previously coded blocks, as previously described. In other examples, the entropy coding unit 56 can also update the probability estimates using the value of each binary, such as, for example, as part of updating the statistics based on the x and y coordinates, as also described previously. The entropy coding unit 56 can use the probability estimates to encode each binary by performing context-adaptive entropy coding.
[0240] As described earlier, as another example, the entropy coding unit 56 can encode the x and y coordinates by coding at least one binary of the sequence that corresponds to one of the coordinates by selecting the statistics from the context model based, at least in part, on the value of at least one binary, such as a corresponding binary, of the sequence that corresponds to the other coordinate. In addition, the entropy coding unit 56 can encode the one or more binaries of the sequence that correspond to one of the coordinates and the one or more binaries of the sequence that correspond to the other coordinate in an interspersed manner.
[0241] In some examples, before encoding each coordinate, the entropy coding unit 56 can subtract the value "1" from each coordinate to allow the coordinates to be coded more effectively than when using other methods. For example, the entropy coding unit 56 can subtract the value "1" from each coordinate before encoding the coordinate to reduce the amount of information used to encode the coordinates. Likewise, an entropy decoding unit, such as the entropy decoding unit 70 described in more detail in the example in Figure 11, can add the value "1" to each coordinate after decoding the coordinate, to determine the coordinate.
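The adjustment described above can be sketched as a pair of inverse mappings, under the assumption that a coordinate that is coded at all does not correspond to the common position and is therefore at least 1:

```cpp
#include <cassert>

// Encoder side: a coordinate that is coded at all is assumed not to be at
// the common position, so it is at least 1 and "value - 1" can be coded
// instead, shortening the codeword.
static int toCodedValue(int coordinate) {
    assert(coordinate >= 1);
    return coordinate - 1;
}

// Decoder side: the inverse adjustment restores the coordinate.
static int fromCodedValue(int codedValue) {
    return codedValue + 1;
}

int main() {
    int lastX = 3;
    int coded = toCodedValue(lastX);         // 2 is binarized and coded
    assert(fromCodedValue(coded) == lastX);  // the decoder recovers 3
    return 0;
}
```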
[0242] In some examples, the entropy coding unit 56 may also encode information that indicates the positions of all other significant coefficients within the block according to the scan order (1022), that is, the position information of significant coefficients for the block. As previously described, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators. As also previously described, the position information of significant coefficients can be encoded by coding each significant coefficient indicator of the sequence by executing a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0243] In this example, the context model can include probability estimates that indicate the probability that each indicator will comprise a given value ("0" or "1", for example). In some examples, the entropy coding unit 56 can determine the probability estimates using the corresponding significant coefficient indicator values for previously encoded video data blocks. In other examples, the entropy coding unit 56 may also update the probability estimates using the value of each indicator to reflect the probability that the indicator will comprise a given value. For example, updated probability estimates can be used to encode position information of significant coefficients for blocks of video data subsequently encoded in the manner described above.
[0244] Finally, the entropy coding unit 56 can stop encoding the position information of the last significant coefficient based on the scan order information for the block (1024). For example, the entropy coding unit 56 can proceed with other coding tasks, such as, for example, the coding of other syntax elements for the block or a subsequent block, as described above.
[0245] In this way, the method of Figure 10 represents an example of a method for encoding x and y coordinates that indicate a position of the last non-zero coefficient within a video data block according to a scan order associated with the block when the scan order comprises a first scan order, and encoding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0246] Figure 11 is a flowchart showing another example of a method for effectively decoding encoded position information of the last significant coefficient based on scan order information for a block of video data, compatible with the techniques of this description. The techniques in Figure 11 can generally be performed by any processing unit or processor, whether implemented in hardware, software, firmware or a combination thereof, and, when implemented in software or firmware, corresponding hardware can be provided to execute instructions for the software or firmware. For purposes of example, the techniques of Figure 11 are described with respect to the entropy decoding unit 70 (Figure 3), although it should be understood that other devices can be configured to perform similar techniques. Furthermore, the steps shown in Figure 11 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without abandoning the techniques of this description.
[0247] Initially, the entropy decoding unit 70 can receive a first signal for a video data block (1100). The block can be a macroblock or a TU of a CU, as previously described. The entropy decoding unit 70 can also decode the first signal to determine an indication of whether the x coordinate, which indicates a position of the last significant coefficient within the block according to a scan order associated with the block, corresponds to a common position (1102). In the same way, the entropy decoding unit 70 can also receive a second signal for the block (1104). The entropy decoding unit 70 can also decode the second signal to determine an indication of whether the y coordinate, which indicates a position of the last significant coefficient within the block according to the scan order, corresponds to the common position (1106).
[0248] For example, the scan order can be a scan order used by an entropy coding unit, such as the entropy coding unit 56, to code the block, and it can be one of a series of scan orders used to encode blocks of video data in the corresponding coding system 10, which comprises the video encoder 20 and the video decoder 30. For example, each of the series of scan orders can originate at the common position, as described above. The common position can correspond to the DC position.
[0249] In addition, each indication can comprise a single binary. For example, the entropy decoding unit 70 can decode the first signal to determine a first binary, which indicates whether the x coordinate corresponds to the common position (bin1 = "1", for example) or not (bin1 = "0"), and decode the second signal to determine a second binary, which indicates whether the y coordinate corresponds to the common position (bin2 = "1", for example) or not (bin2 = "0"). In some examples, the entropy decoding unit 70 can receive each binary directly in the bit stream, that is, the first signal and the second signal can comprise the first binary and the second binary, respectively. In other examples, the entropy decoding unit 70 can decode the first and second signals to determine the respective binaries using a context-adaptive entropy coding process similar to that described above with reference to Figures 7-9, such as, for example, by executing a CABAC process, which includes applying a context model based on a context.
[0250] If the x and y coordinates each correspond to the common position (1108), the entropy decoding unit 70 can stop decoding the position information of the last significant coefficient based on the scan order information for the block (1130). In other words, in cases where the x and y coordinates each correspond to the common position, no additional significant coefficients other than the last (and only) significant coefficient within the block according to the scan order exist within the block. In such cases, the entropy decoding unit 70 does not need to decode any position information of the last significant coefficient, or any scan order information or additional significant coefficient position information for the block. In such cases, for example, the entropy decoding unit 70 can proceed with other decoding tasks, such as, for example, decoding other syntax elements for the block, or a subsequent block.
[0251] If the x and y coordinates do not each correspond to the common position (1108), the entropy decoding unit 70 can also receive scan order data for the block (1110). The entropy decoding unit 70 can also decode the scan order data to determine information that identifies the scan order (1112), that is, the scan order information for the block. In some examples, in which the scan order includes one of two scan orders used within system 10 to encode blocks of video data, the entropy decoding unit 70 can decode the scan order data using a single binary. For example, the single binary can indicate whether the scan order is a first scan order (bin = "0", for example) or a second scan order (bin = "1", for example). In other examples, in which the scan order includes one of three scan orders that can be used within system 10 to encode blocks of video data, the entropy decoding unit 70 can decode the scan order data to determine between one and two binaries. For example, the entropy decoding unit 70 can determine a first binary, which indicates whether the scan order is a first scan order (such as, for example, bin1 = "0" if the scan order is the first scan order, and bin1 = "1" otherwise). In case the first binary indicates that the scan order is not the first scan order, the entropy decoding unit 70 can determine a second binary, which indicates whether the scan order is a second scan order (bin2 = "0", for example) or a third scan order (bin2 = "1", for example). In other examples, other methods can be used to determine the scan order information for the block, which include the use of other binary values. In some examples, the entropy decoding unit 70 can receive each binary directly in the bit stream. That is, the scan order data can comprise one or more binaries. In other examples, the entropy decoding unit 70 can decode the scan order data to determine each binary using a context-adaptive entropy coding process similar to the one described above with reference to Figures 7-9, such as, for example, by executing a CABAC process, which includes applying a context model based on a context. Alternatively, as described earlier, the entropy decoding unit 70 may not receive and decode scan order data for the block when the entropy decoding unit 70 uses an adaptive scan order to decode the block.
[0252] In any case, in the event that the x coordinate does not correspond to the common position (1114), the entropy decoding unit 70 can also receive the encoded x coordinate (1116) and decode the x coordinate based on the scan order (1118). Likewise, in case the y coordinate does not correspond to the common position (1120), the entropy decoding unit 70 can also receive the encoded y coordinate (1122) and decode the y coordinate based on the scan order (1124). As previously described, to decode the encoded x and y coordinates, the entropy decoding unit 70 can also determine statistics that indicate the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order, substantially similar to that described above with reference to the entropy coding unit 56. The statistics can indicate the probability that a coordinate, such as an x or y coordinate, which corresponds to the position of the last significant coefficient within the block according to the scan order, comprises a given value (such as "0", "1", "2", etc.). In other words, the statistics can indicate the probability of each of the x and y coordinates described earlier comprising a given value. In some examples, the entropy decoding unit 70 can determine the statistics using position information of the last significant coefficient for previously encoded video data blocks, such as, for example, the x and y coordinate values for the previously encoded blocks.
[0253] In some examples, the statistics can vary depending on the scan order. In particular, the probability that a given position within the block corresponds to the position of the last significant coefficient within the block according to the scan order can vary depending on the scan order. That is, different scan orders can result in different statistics for the position information of the last significant coefficient for the block. Therefore, when decoding the encoded position information of the last significant coefficient for the block based on the statistics, the choice of statistics based, at least in part, on the scan order can result in the use of accurate statistics and, therefore, can allow for effective coding. Thus, the entropy decoding unit 70 can decode the encoded x and y coordinates based on the statistics, wherein the entropy decoding unit 70 selects the statistics based, at least in part, on the scan order. Therefore, the entropy decoding unit 70 can decode the encoded x and y coordinates based on the scan order. In addition, the entropy decoding unit 70 can update the statistics based on the x and y coordinates to reflect the likelihood that the respective coordinates will comprise specific values. For example, the updated statistics can be used to decode position information of the last significant coefficient for subsequently encoded video data blocks in the manner described above.
[0254] In some examples, to decode the encoded x and y coordinates based on the statistics, the entropy decoding unit 70 can perform a context-adaptive entropy coding process (a CABAC process, for example), which includes applying a context model that includes the statistics based on at least one context. For example, the at least one context can include one of the x and y coordinates and the scan order. In this example, the entropy decoding unit 70 can use the scan order to select the specific context model that includes the statistics. In this way, the entropy decoding unit 70 can decode the encoded x and y coordinates based on the scan order. In addition, in cases where an encoded coordinate (the y coordinate, for example) is decoded after another encoded coordinate (the x coordinate, for example), the entropy decoding unit 70 can decode the coordinate using the value of the other coordinate, previously decoded, as context. That is, the value of a previously decoded coordinate of the x and y coordinates can be used to also select statistics within the context model that indicate the probability of the other coordinate, currently decoded, comprising a given value. The entropy decoding unit 70 can then use the selected statistics to decode the encoded x and y coordinates by performing context-adaptive entropy coding.
[0255] The x and y coordinates can each be represented using a unary codeword comprising a sequence of one or more binaries. Therefore, to decode the encoded x and y coordinates based on the statistics, the entropy decoding unit 70 can decode each binary of a codeword that corresponds to a specific coordinate by performing context-adaptive entropy coding. In this example, the statistics included in the context model, which indicate the probability of the coordinate comprising a given value, can include probability estimates that indicate the probability of each binary of the codeword corresponding to the coordinate comprising a given value ("0" or "1", for example). In addition, the statistics can include different probability estimates for each codeword binary, depending on the position of the respective binary within the codeword. In some examples, the entropy decoding unit 70 can determine the probability estimates using the corresponding binary values for previously encoded video data blocks, such as, for example, codeword binaries that correspond to x and y coordinates for the previously coded blocks, such as, for example, as part of determining the statistics based on the position information of the last significant coefficient for the previously coded blocks, as previously described. In other examples, the entropy decoding unit 70 can update the probability estimates using the value of each binary, such as, for example, as part of updating the statistics based on the x and y coordinates, as also described earlier. The entropy decoding unit 70 can use the probability estimates to decode each binary by performing context-adaptive entropy coding.
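A decoder-side counterpart of the unary codeword handling, assuming a per-binary context selected from the binary's position within the codeword, might look like the following; decodeBin is a placeholder for the arithmetic decoding engine.

```cpp
#include <cstdio>

// Stub standing in for the arithmetic decoding of one binary with the given
// context index; a real decoder would read from the bit stream.
static int decodeBin(int ctxIdx) {
    (void)ctxIdx;
    return 0;  // placeholder value
}

// Decode a coordinate coded as a (truncated) unary codeword: count leading
// ones until a terminating zero is read or the maximum value is reached,
// selecting a per-binary context from the binary's position in the codeword.
static int decodeCoordinateUnary(int maxValue) {
    int value = 0;
    while (value < maxValue && decodeBin(/*ctxIdx=*/value) == 1) {
        ++value;
    }
    return value;
}

int main() {
    int coord = decodeCoordinateUnary(7);
    std::printf("decoded coordinate = %d\n", coord);
    return 0;
}
```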
[0256] As described earlier, as another example, the entropy decoding unit 70 can decode the x and y coordinates by decoding at least one bin of the sequence that corresponds to one of the coordinates by selecting the statistics within the context model based, at least in part, on a value of at least one bin, such as a corresponding bin, of the sequence that corresponds to the other coordinate. In addition, the entropy decoding unit 70 can decode the one or more bins of the sequence that corresponds to one of the coordinates and the one or more bins of the sequence that corresponds to the other coordinate in an interleaved manner.
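By way of illustration only, the sketch below shows one possible way of decoding the bins of the two unary codewords in an interleaved manner, passing the most recently decoded bin of the other coordinate to the bin-decoding routine as a context. The function decode_bin is a hypothetical stand-in for a context-adaptive arithmetic decoding call and is not part of the examples described herein.

```python
def decode_last_position_interleaved(decode_bin, max_bins):
    """Decode the x and y unary codewords bin by bin, in an interleaved manner.

    decode_bin(coordinate, bin_index, other_bin) -> 0 or 1 is a hypothetical
    stand-in for a context-adaptive arithmetic decoding call; `other_bin` is
    the most recently decoded bin of the other coordinate, used as a context.
    """
    x_bins, y_bins = [], []
    x_done = y_done = False
    for i in range(max_bins):
        if not x_done:
            other = y_bins[-1] if y_bins else None
            bin_value = decode_bin("x", i, other)
            x_bins.append(bin_value)
            x_done = bin_value == 0
        if not y_done:
            other = x_bins[-1] if x_bins else None
            bin_value = decode_bin("y", i, other)
            y_bins.append(bin_value)
            y_done = bin_value == 0
        if x_done and y_done:
            break
    # In a unary codeword the coordinate equals the number of '1' bins decoded.
    return sum(x_bins), sum(y_bins)

# Example with a fixed bit source standing in for the arithmetic decoder:
# x = 2 ("1 1 0"), y = 1 ("1 0"), interleaved as x0 y0 x1 y1 x2.
source = iter([1, 1, 1, 0, 0])
assert decode_last_position_interleaved(lambda c, i, o: next(source), 8) == (2, 1)
```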
[0257] When decoding each coordinate, the entropy decoding unit 70 can add the value "1" to each coordinate, which allows the coordinates to be coded more efficiently than when using other methods. For example, as also described earlier, an entropy encoding unit, such as entropy encoding unit 56, can encode the x and y coordinates by first subtracting the value "1" from each coordinate to reduce the amount of information used to encode the coordinates. Therefore, the entropy decoding unit 70 can add the value "1" to each coordinate after decoding it, to determine the coordinate.
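As a simple worked illustration of this adjustment (assuming, for the example, that the value actually coded is simply the coordinate reduced by one):

```python
def coded_value(coordinate):
    """Encoder side: the value actually coded is the coordinate reduced by one."""
    return coordinate - 1

def decoded_coordinate(value):
    """Decoder side: add the value 1 back after decoding to recover the coordinate."""
    return value + 1

# Worked example: a last-position coordinate of 3 is coded as the value 2,
# and the decoder restores 2 + 1 = 3.
assert decoded_coordinate(coded_value(3)) == 3
```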
[0258] In some examples, the entropy decoding unit 70 may also receive encoded significance data for the block (1126). In these examples, the entropy decoding unit 70 can decode the significance data to determine information indicating the positions of all other significant coefficients within the block according to the scan order (1128), that is, the position information of significant coefficients for the block. As previously described, the position information of significant coefficients for the block can be represented using a sequence of significant coefficient indicators. As also previously described, the position information of significant coefficients can be decoded by decoding each significant coefficient indicator in the sequence by performing a context-adaptive entropy coding process (a CABAC process, for example) that includes applying a context model based on at least one context, where the at least one context can include the position of the indicator within the block according to the scan order.
[0259] In this example, the context model can include probability estimates that indicate the probability of each indicator comprising a given value ("0" or "1", for example). In some examples, the entropy decoding unit 70 can determine the probability estimates using values of corresponding significant coefficient indicators for previously coded blocks of video data. In other examples, the entropy decoding unit 70 may also update the probability estimates using the value of each indicator to reflect the probability of the indicator comprising a given value. For example, the updated probability estimates can be used to decode position information of significant coefficients for subsequently coded blocks of video data in the manner described above.
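For illustration only, the sketch below shows one possible way of decoding such a significance map up to the already-decoded last position, selecting a context from the position of each indicator in the scan order. Here decode_flag is a hypothetical stand-in for a context-adaptive arithmetic decoding call, and the one-context-per-position mapping is an assumption made for the example.

```python
def decode_significance_map(decode_flag, scan_positions, last_scan_index):
    """Return a map from block positions to significant coefficient indicators.

    decode_flag(context) -> 0 or 1 is a hypothetical stand-in for a
    context-adaptive arithmetic decoding call.
    scan_positions  -- list of (x, y) positions of the block in scan order
    last_scan_index -- index, in scan order, of the last significant coefficient
    """
    indicators = {}
    for i in range(last_scan_index):
        context = i  # one context per position in the scan order (illustrative)
        indicators[scan_positions[i]] = decode_flag(context)
    # The last significant coefficient itself is known to be significant.
    indicators[scan_positions[last_scan_index]] = 1
    return indicators
```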
[0260] Finally, the entropy decoding unit 70 may stop decoding the encoded position information of the last significant coefficient based on the scan order for the block (1130). For example, the entropy decoding unit 70 can proceed with other coding tasks, such as, for example, decoding other syntax elements for the block, or a subsequent block, as previously described. In this way, the method of Figure 11 represents an example of a method of coding x and y coordinates that indicate a position of the last non-zero coefficient within a video data block according to a scan order associated with the block when the scan order comprises a first scan order, and coding interchanged x and y coordinates that indicate a position of the last non-zero coefficient within the block according to the scan order when the scan order comprises a second scan order, where the second scan order is different from the first scan order.
[0261] Therefore, according to the techniques of this description, an encoded bit stream can comprise position information of the last significant coefficient for a block of video data, that is, for coefficients associated with the block. In particular, video encoder 20 can encode x and y coordinates that indicate a position of the last significant coefficient within the block according to a scan order associated with the block when the scan order comprises a first scan order, and encode interchanged x and y coordinates that indicate a position of the last significant coefficient within the block according to the scan order when the scan order comprises a second scan order. For example, the second scan order may be different from the first scan order. The video decoder 30 can, in turn, decode the position information of the last significant coefficient for the block. In particular, the video decoder 30 can decode the x and y coordinates when the scan order comprises the first scan order and decode the interchanged x and y coordinates when the scan order comprises the second scan order.
[0262] Accordingly, this description also contemplates a computer-readable medium that comprises a data structure stored therein that includes an encoded bit stream. The encoded bit stream stored in the computer-readable medium can comprise video data encoded using a specific format and encoded information that identifies a position of the last significant coefficient within a video data block according to a scan order associated with the block, represented using x and y coordinates. The specific order in which the x and y coordinates are encoded within the bit stream depends on whether the scan order associated with the block comprises a first scan order or a second scan order. More specifically, if the scan order comprises the first scan order, the bit stream can include the position information of the last significant coefficient for the block encoded using the x and y coordinates. In this case, the position information of the last significant coefficient for the block can be decoded, and the resulting x and y coordinates can be used directly to decode the block. Alternatively, if the scan order comprises the second scan order, then the bit stream can include the position information of the last significant coefficient for the block encoded using interchanged x and y coordinates. In this case, the position information of the last significant coefficient can be decoded, the decoded interchanged x and y coordinates are interchanged once again, and the resulting x and y coordinates can be used to decode the block.
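By way of illustration only, the following sketch shows how a decoder could map the two coordinate values, in the order they appear in the bit stream, back to x and y depending on the scan order; the scan order names are assumptions made for the example.

```python
FIRST_SCAN_ORDER = "horizontal"
SECOND_SCAN_ORDER = "vertical"

def last_position_from_bitstream(first_decoded, second_decoded, scan_order):
    """Map the two decoded coordinate values, in bit-stream order, back to (x, y)."""
    if scan_order == SECOND_SCAN_ORDER:
        # The coordinates were coded in interchanged order; interchange them again.
        return second_decoded, first_decoded
    return first_decoded, second_decoded

# Example: for the second scan order the bit stream carries the coordinates
# interchanged, so decoded values (2, 5) yield x = 5 and y = 2.
assert last_position_from_bitstream(2, 5, SECOND_SCAN_ORDER) == (5, 2)
assert last_position_from_bitstream(2, 5, FIRST_SCAN_ORDER) == (2, 5)
```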
[0263] In one or more examples, the functions described can be implemented in hardware, software, firmware or any combination thereof. If implemented in software, the functions can be stored on or transmitted over a computer-readable medium, as one or more instructions or code, and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium that includes any medium that facilitates transfer of a computer program from one place to another, for example, according to a communication protocol. In this way, the computer-readable medium can generally correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium, such as a signal or carrier wave. The data storage medium can be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this description. A computer program product may include a computer-readable medium.
[0264] By way of example, and not by way of limitation, such a computer-readable storage medium may comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL or wireless technologies such as infrared, radio and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals or other transient media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0265] Instructions can be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Therefore, the term "processor", as used herein, can refer to any of the foregoing structures or any other structure suitable for the implementation of the techniques described herein. In addition, in some aspects, the functionality described herein can be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. In addition, the techniques could be fully implemented in one or more circuits or logic elements.
[0266] The techniques of this description can be implemented in a wide variety of devices or apparatuses, including a wireless telephone handset, an integrated circuit (IC) or a set of ICs (a chip set, for example). Various components, modules or units are described in this description to emphasize functional aspects of devices configured to perform the described techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, the various units can be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
[0267] Several examples have been described. These and other examples are within the scope of the following claims.
Claims
1. Method for encoding coefficients associated with a video data block during a video encoding process, the method CHARACTERIZED by the fact that it comprises: performing context-adaptive entropy coding of x and y coordinates indicating a horizontal position and a vertical position, respectively, of a last non-zero coefficient within the block according to a scan order associated with the block, wherein performing the context-adaptive entropy coding includes using statistics that are based on at least one context, and wherein the statistics indicate a probability that each of the x and y coordinates comprises a given value when the coefficients of the video data block are scanned in a first scan order, the probability of the x and y coordinates comprising a given value being the same or similar to the probabilities of interchanged x and y coordinates comprising a given value when the coefficients are scanned in a second scan order different from the first scan order; when the coefficients of the video data block are scanned in the first scan order, as part of performing the context-adaptive entropy coding of the x and y coordinates, encoding the x coordinate and the y coordinate using the statistics, so that the x coordinate is encoded based on the probability of the x coordinate comprising a given value and the y coordinate is encoded based on the probability of the y coordinate comprising a given value; and when the coefficients of the video data block are scanned in the second scan order, interchanging the x and y coordinates and, as part of performing the context-adaptive entropy coding of the x and y coordinates, encoding the interchanged x coordinate and the interchanged y coordinate using the statistics, so that the interchanged x coordinate is encoded based on the probability of the y coordinate comprising a given value and the interchanged y coordinate is encoded based on the probability of the x coordinate comprising a given value.
2. Method according to claim 1, CHARACTERIZED by the fact that the first scan order and the second scan order are symmetrical with respect to each other.
3. Method according to claim 1, CHARACTERIZED by the fact that the first scan order comprises a horizontal scan order and the second scan order comprises a vertical scan order, and in which the horizontal scan order and the vertical scan order originate at a common position within the block.
4. Method according to claim 1, CHARACTERIZED by the fact that it additionally comprises: encoding information that identifies the scan order.
5. Method according to claim 1, CHARACTERIZED by the fact that it additionally comprises: encoding values of non-zero coefficients associated with the video data block based on respective ones of the x and y coordinates and the interchanged x and y coordinates; and outputting the encoded values of the non-zero coefficients in a bit stream.
6. Method according to claim 1, CHARACTERIZED by the fact that encoding each of the x and y coordinates and the interchanged x and y coordinates comprises encoding a sequence of one or more bins, in which the statistics indicate probabilities of each of the bins having a given value.
7. Device for encoding coefficients associated with a video data block during a video encoding process, the device CHARACTERIZED by the fact that it comprises: mechanisms for performing context-adaptive entropy coding of x and y coordinates indicating a horizontal position and a vertical position, respectively, of a last non-zero coefficient within the block according to a scan order associated with the block, wherein performing the context-adaptive entropy coding includes using statistics that are based on at least one context, and wherein the statistics indicate a probability that each of the x and y coordinates comprises a given value when the coefficients of the video data block are scanned in a first scan order, the probability of the x and y coordinates comprising a given value being the same or similar to the probabilities of interchanged x and y coordinates comprising a given value when the coefficients are scanned in a second scan order different from the first scan order; and mechanisms for encoding the coefficients, wherein: when the coefficients of the video data block are scanned in the first scan order, as part of performing the context-adaptive entropy coding of the x and y coordinates, the x coordinate and the y coordinate are encoded using the statistics, so that the x coordinate is encoded based on the probability of the x coordinate comprising a given value and the y coordinate is encoded based on the probability of the y coordinate comprising a given value; and when the coefficients of the video data block are scanned in the second scan order, the x and y coordinates are interchanged and, as part of performing the context-adaptive entropy coding of the x and y coordinates, the interchanged x coordinate and the interchanged y coordinate are encoded using the statistics, so that the interchanged x coordinate is encoded based on the probability of the y coordinate comprising a given value and the interchanged y coordinate is encoded based on the probability of the x coordinate comprising a given value.
8. Device according to claim 7, CHARACTERIZED by the fact that it additionally comprises: mechanisms for encoding values of non-zero coefficients associated with the video data block based on respective ones of the x and y coordinates and the interchanged x and y coordinates; and mechanisms for outputting the encoded values of the non-zero coefficients in a bit stream.
9. Computer-readable medium CHARACTERIZED by the fact that it comprises instructions that, when executed, cause a processor to encode coefficients associated with a video data block during a video encoding process, wherein the instructions cause the processor to perform the method as defined in any one of claims 1 to 6.